Testing and requirements gathering

Lately there has been some heavy discussion on the Agile Testing mailing list, starting with the question of the developer-to-tester ratio. Today Ron Jeffries made a good point in a sub-discussion about the different skill sets of developers and testers:

Suppose we have some “requirement” and some test examples. Suppose that we are worried that the “requirement” is not met. What do we do? We produce another test example. While we are unconvinced that the “requirement” is met, we keep testing and coding. When we become convinced (or sufficiently confident), we stop testing and stop coding. Therefore the tests are the requirements.

Ron Jeffries

After thinking about Ron’s quote for a while, I came to the conclusion that testing – even in an agile context – is what it is: the translation of business interest into the development process. This short conclusion was difficult for me to express, so let me refine it.

Leaving the agile context aside for a second: testing should provide the necessary feedback that the software under test has reached the right degree of requirement fulfillment. Analytical testers use coverage metrics for this; an Agile Tester agrees with the customer representatives on the team which points to measure for requirement fulfillment. As with a Fourier transform in mathematics, you have discrete points on your function (the software). The tester’s job, in order to deliver well-tested software, is to identify the supporting points of that function. If the tests are built on uninteresting points of the function, the testing effort could just as well have been saved. This may sound a little harsh, but if you only check the error behaviour, you may – and most likely will – miss the business-relevant tests on the happy path and therefore deliver software at a very high risk.
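To make this a little more concrete, here is a minimal sketch of what I mean by choosing the supporting points. The domain (a tiny pricing function with vouchers) and all names are purely hypothetical and only serve as an illustration: the error-path test alone tells you nothing about the business value, while the happy-path tests are the points the customer actually cares about.

```python
# Hypothetical example: a tiny pricing function and two ways to "sample" it with tests.
# The domain (orders, vouchers) and all names are made up for illustration only.

def order_total(prices, voucher=None):
    """Sum item prices and apply an optional percentage voucher (e.g. 10 for 10%)."""
    if any(p < 0 for p in prices):
        raise ValueError("negative price")
    total = sum(prices)
    if voucher is not None:
        if not 0 <= voucher <= 100:
            raise ValueError("invalid voucher")
        total -= total * voucher / 100
    return round(total, 2)


# An error-path test only: it passes, but it says nothing about the business value.
def test_rejects_negative_price():
    try:
        order_total([-1.0])
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"


# Happy-path tests: these are the "supporting points" the customer actually cares about.
def test_plain_order():
    assert order_total([10.0, 5.0]) == 15.0

def test_voucher_is_applied():
    assert order_total([100.0], voucher=10) == 90.0


if __name__ == "__main__":
    test_rejects_negative_price()
    test_plain_order()
    test_voucher_is_applied()
    print("all sample tests passed")
```

A suite consisting only of the first test would give a green bar while the voucher calculation – the part the business pays for – remains completely unmeasured.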

Worth reading

Bret Pettichord pointed me to a column by Andrew Binstock: Debunking Cyclomatic Complexity, which shows that cyclomatic code complexity does not correlate with bug likelihood. Personally I found it well worth reading, since it shows how easily you can go wrong with all the measurement activities you might think of.

James Shore reminded me of the aspects of conditioning which I first came across during my time at school, working towards my Abitur. I was introduced to conditioning in educational science in my eleventh year, if I remember correctly. In his article James shows perfectly how wrong assumptions about rewards can be, which I found well worth reading given my background as a team leader.