Good software deserves active testing. The activity of testing does not stop with the work of understanding what the software does. It must be completed by the work of criticism, the work of judging. The undemanding tester fails to satisfy this requirement, probably even more than he fails to analyze and interpret. He not only makes no effort to understand; he also dismisses a product simply by putting it aside and forgetting it. Worse than faintly praising it, he damns it by giving it no critical consideration whatever.
Huh? Harsh words. Let’s discuss them in the light of testing vs. checking.
About a year ago, Michael Bolton came up with a series of blog entries on the difference between testing and checking. Testing involves a human mind, while checking is merely following a previously laid-out plan, probably with test scripts and test cases paved all the way. Read the full series for all the fine nuances Michael came up with; I won’t repeat them here.
Something always seemed to be missing for me, though. Pointing out the difference between testing and checking to others didn’t help much to make a difference. Of course project stakeholders wanted testing conducted – or call it checking. By distinguishing between the two, I still wasn’t able to communicate what it’s all about. This morning I noticed there might be a different pair of terms that captures the underlying distinction in a better way: active vs. passive testing.
Active testing is testing in Michael Bolton’s sense. You have your lights on while interacting with the software. You build a mental model of the underlying software which continues to grow and refine as your conversation with the software continues. After each step you critically ask whether your model seems to be fulfilled, whether the model needs to be adapted, or whether we’ve got a problem in the software. Your brain is continuously engaged in the testing process and helps you come up with new ideas and test cases to pursue. While trying to keep yourself focused on the mission, the charter, or the larger questions at hand, you note down things you notice that you might want to return to later, or you follow up on them, eventually finding and pinpointing problems in the software. This is what I call active testing: the human brain turned on, fully engaged all the time.
On the other hand we have passive testing, which consists of testers following a script in order to get information about the software. All the answers and next questions are laid out in advance – as if a Formula One grand prix had been planned out to the microsecond before the race starts. Even worse, if a single engine blows, the plan becomes obsolete instantly. We see these scripted tests used for mass-testing with hundreds of students, handing them laid-out procedures for what and how to test. It’s cheap to get the illusion of testing capacity in this way, since testers do not need to engage their brains; they don’t need to think, just read and follow along. But capacity is rarely the problem in most software projects. This is what I call passive testing: shut down the human mind, and bang the keys.
Of course, I laid out two extreme positions here. There are good ways to conduct passive testing – like an automated test that confirms what worked once in the past. This is passive testing that can help free up some time for more active testing – provided it is done well, with software development practices in place that go beyond the practices applied to your production code. If you end up with test automation that takes more time to maintain than it frees up, you are doing something substantially wrong. On the other hand, a fully documented step-by-step description, hundreds of pages long, of what and how to test a particular piece of functionality does not really respond to changes when they occur in your development process. That doesn’t mean there are always changes. Go ahead and follow your rigorous test plan if you know in advance everything that will happen over the course of your project. I wouldn’t want to work on such a boring project, though, as I would be missing a substantial challenge in the first place.
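To make the “good passive testing” case concrete, here is a minimal sketch of an automated check that confirms behavior which worked once in the past. The parse_price function and its expected values are hypothetical examples of mine, not from the original discussion; the point is only the shape of such a check.

```python
# A minimal sketch of "passive testing" done well: an automated check
# that re-confirms previously observed, agreed-upon behavior.
# parse_price and its expected values are hypothetical examples.

def parse_price(text):
    """Convert a price string like '$1,299.00' to a float."""
    return float(text.replace("$", "").replace(",", ""))

def check_parse_price():
    # Each assertion encodes a result that was judged correct once,
    # by a human. The check can only confirm or refute these exact
    # expectations -- it asks no new questions, which is precisely
    # what makes it "passive" and cheap to re-run.
    assert parse_price("$0.99") == 0.99
    assert parse_price("$1,299.00") == 1299.0
    assert parse_price("100") == 100.0

check_parse_price()
print("all checks passed")
```

A human mind decided once what the right answers were; from then on the machine repeats the confirmation, freeing the tester’s time for the active questions a script cannot ask.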
Having laid out my definition and understanding of active and passive testing, let’s break down the short excerpt from the beginning.
Good software deserves active testing.
This means that any software you want to give to some potential user needs active testing. Active in the sense that your testers need to engage their brains in what could have gone wrong. How could this software fail in ways we were not aware of before actually trying it out? There are lots of ways something can go wrong with the software at hand. Just consider that having a browser open on a Flash page might lead to lower battery duration. Did you ever think about that when testing your web application? Do you need to?
On the other hand, the first sentence also expresses that once you decide to use passive testing, you should know that you have a mediocre product. If you had good software, why not give it the active testing it deserves? Something not worth doing is definitely not worth doing right. So, I wouldn’t want to actively test software that fails instantly when it starts up for the first time. That would probably be a great waste of my time.
The activity of testing does not stop with the work of understanding what the software does. It must be completed by the work of criticism, the work of judging.
As I wrote this, I already knew it would be controversially discussed after I publish this blog entry. Still, I decided to include it in these terms. Why? We must not stop with understanding what the software does. We have to go deeper, critically think about how things can go astray, and judge whether what we recognize is a problem or not. We have to report our findings and the information gathered from the activity of testing to the stakeholders. That is where we simply provide the information we gathered – but we have to gather it critically, and present it. If we stop with an understanding of what the software does, we will do a mediocre job. When asked for our opinion, we are absolutely allowed to state our criticism of the product at hand, and judge what we think about it. Still, the ship vs. no-ship decision will be up to someone with a different salary.
The undemanding tester fails to satisfy this requirement, probably even more than he fails to analyze and interpret. He not only makes no effort to understand; he also dismisses a product simply by putting it aside and forgetting it. Worse than faintly praising it, he damns it by giving it no critical consideration whatever.
Not demanding to understand the software sets you up for doing a mediocre job at testing. Not understanding what the software does, how it will be used, and how it could threaten the user is a passive testing approach. Instead, demand to get to know the product; don’t put it aside and forget about it. Try to think in terms of the future user. Do you know how the software is going to be used? It is your responsibility to know this. If you don’t, you’re probably doing a below-optimal job at testing – you’re undemanding. Talk to users and your customer to find out more details. Take the job seriously and give it critical consideration.
Now, just in case Benjamin Kelly has read up to this point, I’m going to note where this idea originated for me. The first quote was taken from How to Read a Book, page 138, with the notions of reading and books exchanged for the notions of testing and software. It continues to strike me how related these two activities seem to be. With the exception of that one particular portion, it seems to fit testing by and large. While considering this, I think I also found a better way to express the difference between testing – that is, active testing – and checking – that is, passive testing. Passive testing is “try this out; if it starts up and doesn’t break, it’s probably good”, while active testing includes “try this out; if it starts up…” “What does start up mean? To you? To the user? To the customer?” This is how testers help bring value to the project.