The Testing Quadrants – We got it wrong!

The Testing Quadrants continue to be a source of confusion. I heard that Brian Marick was the first to write them down after long conversations with Cem Kaner. Lisa Crispin and Janet Gregory refined them in their book on Agile Testing.

I wrote a while back about my experiences with the testing quadrants in training situations. One pattern that keeps recurring when I run this particular exercise is that teams new to agile software development – especially the testers – don’t know which testing techniques to put in the quadrant with the business-facing tests that support the team. For them, it seems, all testing is critique of the product. This kept confusing me, since I think we testers can bring great value there. Recently I had an insight, prompted by Elisabeth Hendrickson’s keynote at CAST 2012.

Two weeks ago, I attended CAST 2012, where Elisabeth Hendrickson gave a keynote. If you haven’t already, you probably want to watch it:

In her talk Elisabeth provided a new interpretation of the testing quadrants that looks like a promising way to overcome some of the problems I had with the second quadrant, the business-facing tests that support the team. Elisabeth proposed relabeling the two columns as confirmation vs. investigation.

With the new labels the quadrants are a bit different from the ones that Brian Marick originally wrote down. On the left-hand side, the confirmatory tests that are technology-facing are of course the unit tests and the class-level integration tests that developers write. These help drive the development of the product forward.
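To make the first quadrant concrete, here is a minimal sketch of a technology-facing, confirmatory test: a unit test that pins down the behavior of a single function. The shopping-cart domain is my own hypothetical example, not from the talk; prices are in integer cents to keep the arithmetic exact.

```python
def cart_total_cents(prices_cents, discount_percent=0):
    """Sum item prices (in cents) and apply a whole-percent discount."""
    total = sum(prices_cents)
    return total - total * discount_percent // 100

def test_cart_total_applies_discount():
    # Confirmation: the code does what the developer intended.
    # 1000 + 2000 = 3000 cents, minus 10% = 2700 cents.
    assert cart_total_cents([1000, 2000], discount_percent=10) == 2700

test_cart_total_applies_discount()
print("quadrant 1 check passed")
```

In practice such a test would live in a test runner like pytest or JUnit rather than being called directly, but the point is the same: it confirms an explicit developer expectation and fails loudly when the code drifts from it.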

In the second quadrant now are the expectations of the business. Usually I try to express most of them as executable specifications that I can automate. I can also imagine other business-facing expectations that cannot be automated but probably fall in here, like the reliability of the software product.
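As a sketch of what such an executable specification might look like, here is a business-facing expectation written in a Given/When/Then shape. The account/overdraft domain is hypothetical, and plain Python stands in for a tool like FitNesse or Cucumber that would let the business read and own the wording.

```python
class Account:
    """Hypothetical domain object for the example."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def spec_customer_cannot_overdraw():
    # Given a customer with a balance of 50
    account = Account(balance=50)
    # When she tries to withdraw 80
    try:
        account.withdraw(80)
        overdraw_allowed = True
    except ValueError:
        overdraw_allowed = False
    # Then the withdrawal is rejected and the balance is unchanged
    assert not overdraw_allowed
    assert account.balance == 50

spec_customer_cannot_overdraw()
print("business expectation confirmed")
```

The value here is confirmatory: the business states the expectation up front, and the automated specification keeps confirming it as the product evolves.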

In the third quadrant, on the top right, are tests that help investigate risks to the external quality of the product. For me, exploratory tests fall in this category, but so do usability concerns and end-to-end functionality.

The fourth quadrant then probably consists of internal qualities of the code, like changeability (static code analysis comes to mind), maintainability (how high is your CRAP metric these days?), and design flaws like code smells.
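To illustrate the kind of investigation static code analysis performs in this quadrant, here is a minimal sketch that flags overly long functions as a potential maintainability smell, using Python’s standard `ast` module. The five-statement threshold is an arbitrary assumption for the example; real tools like pylint or Crap4j compute far richer metrics.

```python
import ast

# Hypothetical source under analysis: one tidy function, one bloated one.
SOURCE = """
def short():
    return 1

def long_one():
    a = 1
    b = 2
    c = 3
    d = 4
    e = 5
    return a + b + c + d + e
"""

def long_functions(source, max_statements=5):
    """Return names of functions whose body exceeds max_statements."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
        and len(node.body) > max_statements
    ]

print(long_functions(SOURCE))  # → ['long_one']
```

Unlike the confirmatory quadrants, a finding here is not a pass/fail verdict but a prompt for the team to investigate whether the design needs attention.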

If you now wonder where quality attributes like performance and load fit in this new model, I consider them either part of the business-facing expectations or part of the investigation of external quality attributes. I think this depends on whether they were made explicit early enough, or whether we find out during our investigations that load times are too high, for example.

I think there are more things to discover in this new model. Like any model, it can help you think in a particular direction, but please don’t use it as your only guide. Use this model wisely, and apply critical thinking to your results.

P.S.: If you noticed the typo in Elisabeth’s slide deck, you are not the first. If you watch the video, one of the attendees of the keynote points it out, too.


6 thoughts on “The Testing Quadrants – We got it wrong!”

  1. I haven’t had a chance to watch Elisabeth’s keynote yet (and now my free time is for watching the Olympics, but I’ll get to CAST videos eventually!) I like her take on the Quadrants/Matrix. Like other versions, there will still be gray areas, but that’s fine. The purpose of the Quadrants is to help us think of and plan all the different types of testing that need to be done.

    I worry that in this model, people will not realize that things like Prototyping and Wizard of Oz testing – things that can’t be automated – also go in the top-left quadrant. Otherwise, though, I think it accomplishes the goal of thinking what tests do we need to do, who should do them, and what is helped by automating.

    1. It could be worth taking a closer look at Harry Collins’ Tacit and Explicit Knowledge, and labeling the two columns as mimeomorphic tests (tests that produce the same results when executed twice) and polymorphic tests (tests that vary based on what we observe). Prototyping and Wizard of Oz testing, just like Soap Opera Testing, have their place in these models as well. We need to give this some more thought, but it occurred to me immediately during the talk that there is great potential in Elisabeth’s relabeling.

  2. Folks! Brilliant revisiting of the quadrants. Looking at the first page above, I find the new labels express the (original) viewpoints even better.

    I apply the 4 viewpoints often in selecting a team/project approach, but also in testing skills. A person may have more technical skills than business skills, and so on – it’s a spectrum.

    In discussing testing services/deliveries I have used the angles “project support” or “product critique” – and probably still will. But for the elaboration above – thank you all.

  3. I’m confused by the definition of exploratory testing inside the quadrants. As I see it, exploratory testing is an approach that can be used with any technique or method. So ET can be used in any quadrant, not just the third. I agree with the definition “testing = checking + exploring”, but I cannot agree with the quadrants.

    1. Yes, this isn’t a problem with Marick’s original because his context is more limited, but I think the subsequent version by Crispin/Gregory confuses things a great deal.

  4. “the testers – don’t know which testing techniques to put in the quadrant” – it would be interesting to know whether testers can associate the type of testing with the name they use for it. This turned out to be a real problem in large and small companies, and has even turned out to be a challenging research topic. Thanks!

Leave a Reply

Your email address will not be published. Required fields are marked *