Lately there has been some heavy discussion on the Agile Testing mailing list, starting with the question of the developer-to-tester ratio. Today Ron Jeffries made a good point in a sub-discussion on the different skill sets of developers and testers:
Suppose we have some “requirement” and some test examples. Suppose that we are worried that the “requirement” is not met. What do we do? We produce another test example. While we are unconvinced that the “requirement” is met, we keep testing and coding. When we become convinced (or sufficiently confident), we stop testing and stop coding. Therefore the tests are the requirements.
After thinking a while about Ron’s quote, I have come to the conclusion that testing – even in the agile context – is what it is: the translation of business interest into the development process. This short conclusion was difficult for me to express, so let me refine it.
Leaving the agile context aside for a second: testing shall provide the necessary feedback that the software under test has reached the right degree of requirement fulfillment. Analytical testers use coverage metrics for this; an Agile Tester agrees with the customer representatives on the team on which points to measure requirement fulfillment. As with a Fourier transformation in maths, you have discrete points on your function (the software). The task a tester has to fulfill in order to deliver well-tested software is to identify the supporting points of that function. If the tests are built on uninteresting points of the function, the testing effort might as well have been saved. This may sound a little harsh, but if you only check the error behaviour, you may – and most likely will – miss the business-relevant tests on the happy path and therefore deliver software at very high risk.
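To make the "supporting points" idea concrete, here is a minimal sketch with an entirely hypothetical business rule (a tiered discount function I made up for illustration): the business-relevant points are the tier boundaries, and a test suite that only exercises the error behaviour never touches them.

```python
def discount(order_total):
    """Return the discount rate for an order (hypothetical business rule)."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

# Checking only the error behaviour samples an uninteresting point:
try:
    discount(-1)
except ValueError:
    pass

# The supporting points are the tier boundaries the customer cares about:
assert discount(499) == 0.0
assert discount(500) == 0.05
assert discount(999) == 0.05
assert discount(1000) == 0.10
```

Both test groups "cover" the function, but only the second one samples the points where the business interest actually lives.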