Continuing the series of questions from the CONQUEST 2010 conference in September, we’ll take a closer look at questions about actually testing software.
Against which test basis do testers specify their component tests? Story cards are not sufficient in my opinion, since they are too coarse.
Story cards are a placeholder for a conversation. That said, story cards are intended to be coarse, since the meaningful details about the story are discussed during the iteration together with the customer. That still leaves us with the challenge of bringing some testing into the iteration. Mike Cohn explains in his book User Stories Applied that the product owner writes down acceptance criteria on the back of the card. These acceptance criteria then serve as a starting point for the tests that are going to be automated during the iteration. Testers use the acceptance criteria to bring in test automation, and extend these basic tests from there.
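As a sketch of what this looks like in practice, here is a hypothetical story with acceptance criteria from the back of the card turned directly into automated tests. The story, the discount rule, and all names are invented for illustration:

```python
# Hypothetical story: "As a shopper, I get a 10% discount on orders over 100 EUR."
# Acceptance criteria from the back of the card, automated as tests.

def discounted_total(order_total):
    """Apply a 10% discount to orders strictly over 100 (hypothetical rule)."""
    if order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

def test_no_discount_at_threshold():
    # Criterion 1: an order of exactly 100 gets no discount.
    assert discounted_total(100) == 100

def test_discount_above_threshold():
    # Criterion 2: an order of 200 is reduced to 180.
    assert discounted_total(200) == 180.0

if __name__ == "__main__":
    test_no_discount_at_threshold()
    test_discount_above_threshold()
    print("acceptance criteria pass")
```

From this starting point a tester would then add further cases the card never mentioned, such as negative amounts or rounding behavior.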
In addition to automated tests, exploratory tests are executed manually before the iteration demonstration is held. The team tackles any remaining risks with these tests, so that each user story is both tested against the previously defined acceptance criteria and explored through manual testing.
Beyond this simple explanation there are a couple of guides in the form of heuristics and oracles, which Agile testers use to make sure that all the relevant and meaningful tests are executed. First of all there are the testing quadrants, which Brian Marick wrote down a while ago. Additionally, Mike Cohn’s testing pyramid provides guidance on which layers to test in which depth. Finally, all test heuristics and oracles that you might know from more traditional test projects still apply, of course. All these together yield a foundation for Agile testing.
How can I achieve a certain “completeness” of component and system tests, or how do I prove this? Pure code coverage metrics, that is white-box coverage, do not suffice.
First of all, you should ask yourself what you want to measure. In 2004 Cem Kaner co-wrote a paper with Walter P. Bond, Software Engineering Metrics: What Do They Measure and How Do We Know?. In it they explain that traditional metrics like code coverage cannot help you see what was never written into the code in the first place. So, code coverage metrics do help me cure my own self-blindness when I write unit tests for my code, revealing tests I didn’t think about, but as a control element in software projects they are useless.
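A minimal sketch of that self-blindness point, with invented functions: coverage can reveal a branch I forgot to test, but it can say nothing about a requirement that never made it into the code at all.

```python
# Hypothetical shipping rules for illustration.

def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg > 30:      # heavy-parcel surcharge
        return 15.0
    return 5.0

def test_standard_parcel():
    # The only test I thought of.
    assert shipping_cost(10) == 5.0

if __name__ == "__main__":
    test_standard_parcel()
    # A coverage tool (e.g. coverage.py) would flag the untested
    # surcharge branch, curing my self-blindness. It cannot flag
    # that express shipping was never implemented at all.
    print("tests pass, but two branches remain uncovered")
```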
Agile projects value working software. So, instead of tackling coverage at different levels like requirements coverage, design coverage or code coverage, the literature suggests demonstrating the iteration’s results to customer representatives. By showing the software to its potential users, teams can find out what degree of completeness they have achieved and what might still lie ahead of them. In the first few iterations the backlog of stories might grow as the team gets feedback from the customer, but this should not last for too long. Instead the team should find their weaknesses during the iteration reflections or retrospectives, and adapt their work process to their findings.
In regard to regression tests, people often talk about coverage. What is a common degree of functional coverage for software using regression tests?
I have problems answering this question, since different people understand different things by the term “regression tests”. So, let me first explain the term from my perspective. Any test executed more than once is a regression test, intended to find regression bugs in the software under test. Regression bugs are changes in the behavior of the software that were not intended.
So, the practice of writing a test first, before writing any line of code, yields an automated regression test suite, which has the potential to cover every line of code that was written. By design, this regression test suite detects unexpected changes in behavior. The same holds for functional test automation introduced using the practice we call acceptance test-driven development, or ATDD for short. Though the name might be misleading, the practice yields a regression test suite which provides a safety net for functional regression bugs. If this does not suffice, the team will find out and adapt their process accordingly, so that future iterations continue to improve the coverage regarding unexpected changes.
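The test-first rhythm can be sketched as follows; the example function is hypothetical, and the point is the order: the test exists before the production code, and afterwards it stays in the suite forever as a regression test.

```python
# Test-first sketch: the test below is written first and fails,
# because fizzbuzz does not exist yet. Then just enough production
# code is written to make it pass. From then on, any unintended
# change to fizzbuzz's behavior breaks this test.

def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(2) == "2"

def fizzbuzz(n):
    # Minimal production code written to satisfy the test above.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

if __name__ == "__main__":
    test_fizzbuzz()
    print("regression suite green")
```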
With regard to tests, people talk about functional and non-functional requirements, as well as corresponding test cases.
To what extent shall non-functional requirements be covered with test cases? Is that even possible?
The testing quadrants that I mentioned earlier provide guidance on the functional and non-functional areas. Usability tests can provide insights to critique the product from the business perspective. Facing the technology, performance and load tests may critique the product regarding the response times and multi-user behavior of the software. Of course, beyond these more common tests for non-functional requirements, tests related to the other *ilities in software testing are also covered.
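As one small illustration of automating a non-functional requirement, here is a hypothetical micro-benchmark that asserts a response-time budget. This is a sketch only; real performance and load tests use dedicated tooling and realistic environments, and the function, budget, and workload here are all invented:

```python
import time

def lookup(data, key):
    # Hypothetical operation whose response time we want to bound.
    return data.get(key)

def test_lookup_is_fast_enough():
    # Non-functional requirement (invented): 1000 lookups must
    # finish within a generous one-second budget.
    data = {i: i * i for i in range(100_000)}
    start = time.perf_counter()
    for _ in range(1000):
        lookup(data, 42)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0

if __name__ == "__main__":
    test_lookup_is_fast_enough()
    print("performance budget held")
```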