Active vs. passive testing

Good software deserves active testing. The activity of testing does not stop with the work of understanding what the software does. It must be completed by the work of criticism, the work of judging. The undemanding tester fails to satisfy this requirement, probably even more than he fails to analyze and interpret. He not only makes no effort to understand; he also dismisses a product simply by putting it aside and forgetting it. Worse than faintly praising it, he damns it by giving it no critical consideration whatever.

Huh? Harsh words. Let’s discuss them in the light of testing vs. checking.

To recall: about a year ago Michael Bolton came up with a series of blog entries on the difference between testing and checking. Testing involves a human mind, while checking is merely following a previously laid-out plan, probably with test scripts and test cases paved all the way. Read the full series for all the fine nuances that Michael came up with; I won’t repeat them here.

Something about it always struck me as missing, though. Pointing others to the difference between testing and checking didn’t help much to make a difference for me. Of course project stakeholders wanted to conduct testing, or call it checking if you like. By distinguishing between the two, I still wasn’t able to communicate what it’s all about. This morning I noticed there might be a different term which captures the underlying idea in a better way: active vs. passive testing.

Active testing is testing in Michael Bolton’s words. You have your lights on while interacting with the software. You build a mental model of the underlying software which continues to grow and refine as your conversation with the software continues. After each step you are critically asking whether your model seems to be fulfilled, whether the model needs to be adapted, or whether we have a problem in the software. Your brain is continuously engaged in the testing process and helps you to come up with new ideas and test cases to pursue. While trying to keep yourself focused on the mission, the charter, or the larger questions at hand, you note down things you noticed which you might want to return to later, or you follow up on them, eventually finding and pinpointing problems in the software. This is what I call active testing: with the human brain turned on, fully engaged all the time.

On the other hand we have passive testing, which consists of testers following a script in order to get information about the software. All the answers and next questions are laid out, as if a Formula One grand prix had been planned out to the microsecond before the race starts. Even worse: if a single engine bursts, the plan becomes obsolete instantly. We see these scripted tests in mass-testing efforts with hundreds of students, handing them laid-out procedures on what and how to test. It’s cheap to get the illusion of testing capacity in this way, since testers do not need to engage their brains, and they don’t need to think, just read and follow along. But capacity is rarely the real problem in most software projects. This is what I call passive testing: shut down the human mind, and bang the keys.
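
To make the contrast concrete, here is a minimal sketch of what such a laid-out script looks like once you write it down as code. Everything in it is hypothetical (the FakeApp stand-in, the field names, the expected greeting); the point is that every step and every acceptable answer is fixed in advance, so any deviation from the planned path makes the script useless:

    # A fully scripted, passive check. "FakeApp" is a hypothetical stand-in
    # for whatever drives the real application (a browser driver, an API
    # client); all names and expected values are made up for illustration.
    class FakeApp:
        def __init__(self):
            self.fields = {}
            self.title = ""

        def open(self, path):
            self.path = path

        def type(self, field, value):
            self.fields[field] = value

        def click(self, button):
            # Pretend the login always succeeds on the scripted happy path.
            self.title = "Welcome, " + self.fields["username"]

        def page_title(self):
            return self.title

    def scripted_login_check(app):
        # Every step and every expected answer is laid out in advance,
        # like the grand prix planned to the microsecond. The script
        # cannot react to anything it did not anticipate.
        app.open("/login")
        app.type("username", "testuser")
        app.type("password", "secret123")
        app.click("submit")
        assert app.page_title() == "Welcome, testuser"

    scripted_login_check(FakeApp())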

Of course, I laid out two extreme positions here. There are good ways to conduct passive testing – like an automated test that confirms what worked once in the past. This is passive testing that can help us free some time for more active testing – if it is done well, with software development practices in place that should go beyond even the practices of your production code. If you end up with test automation that you need to maintain more than it frees your time, you are doing something substantially wrong. On the other hand, a fully documented step-by-step description, hundreds of pages long, of what and how to test a particular functionality is not really responding to changes when they occur in your development process. That doesn’t mean that there are always changes. Go ahead and follow your rigorous test plan if you know in advance what will happen all along your project. I wouldn’t want to work on such a boring project, though, as I would be missing a substantial challenge in the first place.
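
As a sketch of what “done well” can mean, here is the passive check from above refactored so that the knowledge of how to log in lives in one place. This resembles the well-known Page Object idea from UI test automation; again, all names are hypothetical. When the login flow changes, one helper changes, and the checks keep freeing time instead of consuming it:

    # The same passive check, factored for low maintenance. The log_in
    # helper is the single place that knows how logging in works today,
    # so a changed login flow means one edit, not a hunt through scripts.
    def log_in(app, username, password):
        app.open("/login")
        app.type("username", username)
        app.type("password", password)
        app.click("submit")

    def check_known_user_sees_greeting():
        app = FakeApp()  # the stand-in from the previous sketch
        log_in(app, "testuser", "secret123")
        assert app.page_title() == "Welcome, testuser"

    def check_greeting_uses_login_name():
        app = FakeApp()
        log_in(app, "alice", "secret123")
        assert "alice" in app.page_title()

    check_known_user_sees_greeting()
    check_greeting_uses_login_name()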

Having laid out my definition and understanding of active and passive testing, let’s break down the short excerpt from the beginning.

Good software deserves active testing.

This means that any software that you want to give to some potential user needs active testing. Active in the sense that your testers need to engage their brains in what could go wrong. How could this software fail in ways that we were not aware of before actually trying it out? There are lots of ways something can go wrong with the software at hand. Just consider that having a browser with a Flash page open might lead to shorter battery life. Did you ever think about that when testing your web application? Do you need to?

On the other hand, the first sentence also expresses that once you decide to use passive testing, you should know that you have a mediocre product. If you had good software, why not give it the active testing it deserves? Something not worth doing is definitely not worth doing right. So I wouldn’t want to actively test software that fails instantly when it starts up for the first time. That would probably be a great waste of my time.

The activity of testing does not stop with the work of understanding what the software does. It must be completed by the work of criticism, the work of judging.

As I wrote this, I already knew that it would be controversially discussed after I published this blog entry. Still, I decided to include it in these terms. Why? We must not stop with understanding what the software does. We have to go deeper and critically think about how things can go astray, and judge whether what we recognize is a problem or not. We have to report our findings and the information gathered from the activity of testing to the stakeholders. We don’t simply provide the information we gathered; we have to gather it critically, and then present it. If we stop at an understanding of what the software does, we will do a mediocre job. When asked about our opinion, we are absolutely allowed to state our criticism of the product at hand, and judge what we think about it. Still, the ship vs. no-ship decision will be up to someone with a different salary.

The undemanding tester fails to satisfy this requirement, probably even more than he fails to analyze and interpret. He not only makes no effort to understand; he also dismisses a product simply by putting it aside and forgetting it. Worse than faintly praising it, he damns it by giving it no critical consideration whatever.

Not demanding to understand the software sets you up to do a mediocre job at testing. Not understanding what the software does, how it will be used, and how it could threaten the user, is a passive testing approach. Instead, demand to get to know the product; don’t put it aside and forget about it. Try to think in terms of the future user. Do you know how the software is going to be used? It is your responsibility to know this. If you don’t know it, you’re probably doing a below-optimal job at testing; you’re undemanding. Talk to users and your customer to find out more details. Take the job seriously enough to give it critical consideration.

Now, just in case Benjamin Kelly has read up to this point, I’m going to reveal where this idea originated for me. The first quote was taken from How to Read a Book, page 138, and I exchanged the notions of reading and books for the notions of testing and software. It continues to strike me how related these two activities seem to be. With the exception of one particular portion, it seems to fit testing by and large. While considering this, I think I also found a better way to express the difference between testing (that is, active testing) and checking (that is, passive testing). Passive testing is “try this out; if it starts up and doesn’t break, it’s probably good”, while active testing includes “try this out; if it starts up…” “What does start up mean? To you? To the user? To the customer?”. This is how testers help to bring value to the project.
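
Condensed into code, that last contrast might look like this. The check itself is hypothetical, and the active tester’s questions deliberately appear only as comments, because no script can contain them:

    def passive_smoke_check(app):
        # Passive testing, condensed: one fixed question, one fixed answer.
        app.open("/")
        assert app.page_title() != ""  # "it starts up and doesn't break"

    # The active tester's questions never make it into the script; they
    # live in a human head:
    #   What does "start up" mean? To you? To the user? To the customer?
    #   Started with which data, on which machine, after which upgrade?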

5 thoughts on “Active vs. passive testing”

  1. Thank you for another great post Markus. Differentiating between testing and checking by using ‘active testing’ and ‘passive testing’ is really helpful. It also helps to explain why many people think that you can go on a 3/4 day training course and suddenly become a tester; they are only thinking about passive testing. Active testing encompasses a whole new set of skills that takes years to build up – and requires aptitude and specific personality traits to begin with.

    Thanks,

    Stephen

  2. Active testing vs. Passive.

    The concept on its own didn’t resonate strongly with me originally, but you make some interesting arguments.

    Prior to reading your post, these things for me fell into three areas: testing, checking, and being lazy. The difference between the latter two being that there is purpose behind checking and the results of checking still need to be interpreted by an active mind, whereas lazy testing is doing the minimum expected.

    I can see something in drawing parallels to active and passive reading though. There’s something in that, but for me I think I need to turn that one over in my brain a little more. Likening passive reading to test automation seems to be an analogy that has legs, though it falls down for me if you liken it to lazy or sloppy testing.

    If you’re already familiar with material, you can skim it if you so choose (and in the case of testing, automated or manual, look for differences that shouldn’t be there). It can pay to review your checks from time to time to make sure you’re still checking the right stuff, and to amend checks that have gone awry. Calling this passive testing feels okay to me. By the same token I’m totally fine with calling it checking too.

    Testing that causes more maintenance overhead than benefit, manual test execution with one’s brain disengaged, incompetent testing – none of this fits for me. It’s lazy or it’s incompetent or both. I wouldn’t call this passive testing, and I don’t think you are either, although you do seem to mention it in passing.

    As for active testing – to me, that’s testing. Call it active if you like, but making that distinction is not important to me – at least not at this point.

    I think we’re using terminology differently. You seem to be talking at quite a granular level when you refer to a good product versus a mediocre one. You might reject a build for instance if it doesn’t make it through your smoke (passive) tests. When thinking about this, I’m thinking in broader terms – looking at the tester’s own motivation, hence with your statement ‘good software deserves active testing’ I experienced cognitive dissonance. My first reaction was ‘if a job is worth doing, isn’t it worth your best effort every time?’. Now that I see where you’re coming from, I think we’re not so far apart, actually.

    I think what you’re saying about critically gathering information and presenting it is spot on. Mediocre testing presents information. Excellent testing presents insight. I’m more likely to simply call mediocre testing bad testing rather than passive.

    I like the distinction of passive testing (I didn’t think I was going to). I’m not sure where I’ll have the need to use it, but it’s a nice term to have in the testing playbook all the same. Kudos to you, sir.

  3. Thanks for a great post, Markus. Words have different meanings for people, so giving alternatives to the testing/checking distinction, which Bolton introduced, is important.

    I think it’s very important to set the context before discriminating between good and less good testing – it depends on what you’re trying to achieve. Checking/passive testing is a valuable and necessary activity in most projects and while I personally consider myself an active tester, I have worked with some excellent checkers in my career, and they have even found some important bugs which I overlooked.

    Cheers,
    Anders :-)
