Software Testing is not a commodity!

Stick around in software testing long enough, and you will see enough ideas come and go to be able to sort out the ones that look promising from the ones you just hope will go away soon enough that no manager will pay any of her attention to them. There have been quite a few of those in the history of software testing, and in my experience the worst things started to happen whenever someone tried to replace a skilled tester with some piece of automation – whether that particular automation was a tool-based approach or some sort of scripted testing approach. A while ago, Jerry Weinberg described the problem in the following way:

When managers don’t understand the work, they tend to reward the appearance of work. (long hours, piles of paper, …)

The tragic thing is that this also holds true for the art of discovering information about how usable a given piece of software is.

Why do we test software?

If we were able to write software right the first time, there would clearly be no need to test it. Unfortunately, we humans are far from perfect. Take, for example, the book I wrote mostly through 2011. 200 pages, lots of reviewing, production planning, and stuff happening at the end. And still, while reviewing the German translation, I spotted a problem in the book – clearly visible at face value. I had spent at least two weeks after work going through the book once more to get everything right. Yet I failed to see this obvious problem.

The problem lies in our second-order ignorance: the things we don’t know that we don’t know. These are the things we cover with good hope and prayers that it will all work. Murphy’s Law also has a role to play here.

The very act of software testing then becomes finding out as much as possible about what we are unaware of. This includes not only exercising the product, but also finding out new things about it. Skilled testers learn more about the product, the product domain, and the development team over the course of the whole product lifecycle.

Why do we repeat tests?

But how come we focus on regressions so often in our industry? It has to do with first-order ignorance. A regression problem is a bug that gets introduced a second time, although it had already been fixed in the meantime. Since we were already fully aware of the problem, the bug is no longer something that we don’t know that we don’t know. It has become something that we know now, but we don’t know whether we will still know it tomorrow. That’s why we introduce a regression check for tomorrow, so that it will remind us about the problem that we tried to avoid this time.
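
To make this concrete, below is a minimal sketch of what such a regression check might look like, written in Python with pytest. The function parse_price and the comma-separator bug are invented for illustration; the point is merely that the check pins down a bug we fixed once, so that it reminds us tomorrow if we ever reintroduce it.

    # Hypothetical regression check, sketched with pytest.
    # parse_price and the comma-separator bug are invented for illustration.
    import pytest

    def parse_price(text: str) -> float:
        # The (made-up) fix: accept a comma as decimal separator.
        # Before the fix, "1,99" raised a ValueError.
        return float(text.replace(",", "."))

    def test_parse_price_accepts_comma_as_decimal_separator():
        # The regression check: the thing we know today, written down
        # so that we still "know" it tomorrow.
        assert parse_price("1,99") == pytest.approx(1.99)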

Read that sentence again. Yes, it’s speculation. We speculate that we might break the software again tomorrow. With this speculation comes a whole lot of cost. We have opportunity costs for doing the test and for automating it, and with every run we have the opportunity cost of analyzing the result (if we have to).
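
To get a feel for those costs, here is a rough back-of-the-envelope sketch in Python. All numbers are invented placeholders, not measurements from any project; they only illustrate how the cost of one automated check is made up of a one-time automation effort plus a small analysis cost on every run.

    # Back-of-the-envelope sketch of the cost of a single regression check.
    # Every number here is an invented placeholder, not a measurement.
    hours_to_automate = 4            # writing and wiring up the check (one-time)
    runs_per_year = 250              # roughly one run per working day
    analysis_hours_per_run = 0.05    # a few minutes to look at results, flaky failures, etc.

    total_hours_first_year = hours_to_automate + runs_per_year * analysis_hours_per_run
    print(f"Rough cost of this one check in its first year: {total_hours_first_year:.1f} hours")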

We wouldn’t need this if we were able to realize that a regression bug introduced in our software is an opportunity to learn what in our current process is not working and caused that bug to re-occur. Every regression bug discovered should be an invitation to start a root cause analysis and fix the underlying problem rather than deal with the symptoms.

That said, when you end up with a lot of regression tests, and do not find a better way to deal with the situation than to demand more tests, you fail to realize that you should be doing something else. Or, as Einstein famously once said, insanity is doing the same thing over and over again and expecting different results.

Trade-Offs

So, testing on the one hand is learning and providing information. But what about the human side of software development? Right, we humans are inconsistent creatures of habit, as I learned from Alistair Cockburn. That means if we focus on the mere discovery of information, we are probably going to miss those opportunities to see where we were inconsistent and introduced a regression problem. If we focus on regression problems only, then we will suffer from inattentional blindness.

As I learned through Polarity Management, if you find yourself struggling from one pole to the other, you are likely solving the wrong problem. In this case, we should strive to find a trade-off between enough learning on the one hand, and enough awareness of regression problems on the other.

What has all of this to do with software testing being a commodity? Software testing is not a commodity in the sense that you could simply scale it up and maximize its efficiency on your project. You will have to deal with the things you learn along the way, and you will also have to deal with the various ways in which your software development process is broken. The act of management, and for the same reasons the act of test management, should therefore strive to find the right trade-off between exploration and regression prevention for their particular project context.

5 thoughts on “Software Testing is not a commodity!”

  1. Thanks for your post.
    Agree with it.
    The only thing I would add is that one motivation I had for putting lots of effort into (mostly automated) regression tests is that my customers were very angry when regressions happened. Somehow they could live with some features not being completely OK on the first delivery, but once they find a bug and get a fix, they don’t want to see that bug ever again, or they lose confidence in us.

    1. That is one of the trade-off decisions that you have to consider in your context. Introducing regression errors is usually something people fear a lot. However, any regression test comes with an opportunity cost. If the opportunity cost of not working on something else outweighs the return on investment in that automated test, I would recommend not automating the itsy-bitsy teeny-weenie stuff, and instead heading for the bigger problems.

      Part of the problem with that answer is that lost confidence is hard to measure.

  2. Hi Markus,

    While I agree with your argument for the most part, I have to take issue with your idea of what regression testing is all about. It’s true that some people add test cases for past bugs to their so-called regression suites and that has always seemed redundant to me. If we only did regression testing to ensure that an old bug has not reappeared, then of course that would be silly and wasteful.

    But in 30+ years I’ve never seen an organisation for which this is the primary purpose of regression testing. Instead, they’re attempting to find new bugs that changes – whether for bug fixes or enhancements – might have introduced or exposed. The smart ones do an impact assessment on their changes to identify and target functionality that could have been affected. They might rerun tests that have previously passed, or they might do new tests, or some mix of both. The less smart ones simply rerun a test suite that has, on previous runs, already confirmed the results that the stakeholders want to see. Either way, they do get benefit from the exercise, if only some level of confidence that features they care about still work the way they did before the changes.

    1. I love that example of the impact analysis. That is one other way to deal with the problems I seem to see a lot in automated tests right now. Too much reliance on a set of automated tests probably suggests that you don’t know enough about your software and how you structured it. Again, I find this useful information regarding how the team produces code, and where possible areas for improvement might lie.
