At the TestBash in Cambridge,
Steve Green introduced an 8-layer model for Exploratory Testing.
Steve started with some quotes about Exploratory Testing that he hears from other people:
- Ad-hoc and random
- Unstructured and unplanned
- Just trying to break the system
- Don’t know what you’ve done
- Don’t know what you haven’t done
- Not repeatable
I encounter many of these during my own work as well.
If you look at other professions, exploration is a highly valued activity, Steve explained. Sir Francis Drake, for example, certainly had a plan and knew where he had been, and people took him very seriously. How come this is different in testing?
In Steve’s earlier days, he started with steps to test a system. These eventually broke the whole system. By identifying the steps to reproduce the bugs, and thereby finding the root cause, they could work out why it failed – which used to take a lot of time. Over the past ten years our community has come to an understanding of Exploratory Testing that is highly structured in nature. This structure includes six building blocks:
- Inventory – what is there to test?
- Oracles – how do we know if it’s right?
- Test plan – a flexible outline of our work
- 8-layer testing model – a structured approach to exploration
- Reporting – minimal but sufficient test reporting is crucial in a management report, and we can zoom into more detailed information if necessary
- Management – ideally session-based (but I don’t dare to call it a best practice)
The 8-layer model is a framework, not a process. It helps us plan and control our testing. It also helps us to find bugs with the simplest sequence of events and the most “vanilla” data, making diagnosis and bug advocacy easier. The 8-layer model also provides a vocabulary for reporting test coverage.
Steve explained that there is an underlying paradigm: we only do things to the extent that it is useful to do so. That might mean documentation that is minimal with regard to test coverage and results.
The 8-layer testing model consists of
- Input constraint and data validation tests
- Input combination tests
- Control flow tests
- Data flow tests
- Stress tests
- Basic scenario tests
- Extended scenario tests
- Freestyle exploratory tests
(ICICCFDFSBSESFE doesn’t form a mnemonic, unfortunately.)
You can leave out some of the layers. For example, if your developers have earned credibility with you by doing decent unit testing, then you can focus more on the later layers of the model.
In the first layer, we focus on input constraint and data validation tests. The TestObsessed heuristic cheat sheet provides some examples of these. Mandatory fields, maximum field lengths, as well as domain constraints like permitted characters and formatting rules were some examples that Steve mentioned. If there is a functional specification or data dictionary, we can compare the actual behavior with the intended behavior. If there isn’t, we progressively build our own data dictionary. Steve explained that they once used voice recognition software to feed data into a system. At that point they found out that there were no keyboard events or mouse clicks, yet the software was still expected to work.
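To make the first layer concrete, here is a minimal sketch (my own illustration, not from Steve’s talk) of how constraint information from a data-dictionary entry could be turned into layer-one test values. The field spec, the `z` filler, and the use of `#` as a stand-in for a disallowed character are all assumptions for the example.

```python
# Derive layer-one test values for a single field from a small,
# hypothetical data-dictionary entry (mandatory flag, maximum
# length, allowed character set).

def constraint_test_values(max_length, mandatory, allowed):
    """Return (label, value) pairs probing the field's constraints."""
    filler = allowed[0]
    tests = []
    if mandatory:
        tests.append(("empty input", ""))  # mandatory-field check
    tests.append(("at max length", filler * max_length))
    tests.append(("over max length", filler * (max_length + 1)))
    # assumes '#' is outside the allowed character set
    tests.append(("disallowed character", filler * (max_length - 1) + "#"))
    return tests

for label, value in constraint_test_values(5, True, "abcdef"):
    print(f"{label}: {value!r}")
```

In practice the labels would come from your (possibly self-built) data dictionary rather than being hard-coded.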
On layer two we start looking for combinations and how inputs interact with each other. We test relevant combinations if we know that inputs interact. We also look for undocumented interactions. Steve also pointed to the pairwise testing work of James Bach and Justin Hunter, which can help you come up with a minimal set of combinations for a given set of inputs.
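As a sketch of the idea behind pairwise testing (my own greedy illustration, not Bach’s or Hunter’s actual tools), the following builds a small suite in which every pair of values of every two parameters appears in at least one test. The brute-force inner search is only practical for small parameter sets; the example parameters are invented.

```python
import itertools

def pairwise_suite(params):
    """Greedily pick full combinations until every value pair of every
    two parameters is covered at least once. params: name -> values."""
    names = list(params)

    def pairs_of(combo):
        # all ((param-index, value), (param-index, value)) pairs in combo
        return set(itertools.combinations(enumerate(combo), 2))

    uncovered = set()
    for combo in itertools.product(*(params[n] for n in names)):
        uncovered |= pairs_of(combo)

    suite = []
    while uncovered:
        # brute force: take the combination covering the most uncovered pairs
        best = max(itertools.product(*(params[n] for n in names)),
                   key=lambda c: len(pairs_of(c) & uncovered))
        uncovered -= pairs_of(best)
        suite.append(dict(zip(names, best)))
    return suite

params = {"browser": ["Firefox", "IE"], "os": ["Windows", "Mac"],
          "role": ["admin", "guest"]}
suite = pairwise_suite(params)
print(len(suite), "tests instead of", 2 * 2 * 2)
```

The saving grows quickly with more parameters and values, which is why the pairwise tools are worth a look.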
In layer three we take a look at flows through the system. These tests are aimed at the business logic in a structured manner. We identify all logical paths through the system, and the data required to force the system through those paths. Generally we use very “vanilla” data to avoid triggering bugs that are not related to the logic. We use unique data in every field where possible, constructed such that we can easily tell if it is corrupted, missing, or in the wrong place.
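One way to construct such unique, self-describing data (my own sketch, not a technique Steve prescribed) is to embed a run id and the field name in every value, with a fixed width and an end marker that makes truncation visible:

```python
def traceable_value(field_name, run_id, width):
    """Build a fixed-width value that names its own field and run,
    padded with 'z' and ended with '#' so truncation is visible."""
    core = f"{run_id}-{field_name}-"
    return (core + "z" * width)[: width - 1] + "#"

def looks_intact(value, run_id, width):
    """Quick oracle: right length, right prefix, end marker present."""
    return (len(value) == width
            and value.startswith(run_id + "-")
            and value.endswith("#"))

v = traceable_value("surname", "T01", 12)
print(v)  # prints 'T01-surname#'
```

If such a value shows up corrupted or in the wrong report column, the embedded field name tells you immediately where it came from.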
Layer four is closely related to layer three. Here we test data flow through the system. We push data in through the front-end and identify where it goes as all the logical paths are exercised.
In layer five we stress the system. Once we have identified where all the data goes, we push the maximum possible amount through each field and look for truncation or other forms of corruption.
Steve’s sixth layer refers to basic scenario tests. Individual functions are executed in sequences that replicate basic happy-path user behavior. In comparison, the extended scenarios in layer seven combine multiple basic scenarios and execute them in large numbers to simulate real user behavior over a longer period. That might mean that we repeat the same test many times, or the same set of scenarios in different sequences. He referred to James Whittaker’s book on Exploratory Software Testing, whose tours expand on this concept in layer seven.
The eighth and final layer of the model uses “What if…” tests to investigate what a user of the system can do. We use our full knowledge of the system to do things like looking for race conditions, trying multiple concurrent logins on the same account, or editing URLs. He referred to “How to Break Web Software” by James Whittaker for examples of this.
Overall I think that the proposed model can help more traditional testers see the bridge between &§$%&-certification lessons and Exploratory approaches to software testing. I expect thinking testers to adapt it soon, and to find more creative ways to test. If you don’t do that as a tester, you’re probably sticking to the dogma that Alan Richardson referred to earlier today. In the end I wondered how the layers could map onto different mission types in session-based Exploratory Testing – but I leave that to the ambitious readers of my blog. :)
8 thoughts on “TestBash: An 8-layer model for Exploratory Testing”
You are right when you say that this approach is designed to help traditional and novice testers get into exploratory testing. We find that many testers cannot make the big leap directly to the teachings of writers such as Bach, Bolton, Whittaker etc. We see our model as being a starting point that testers can use while they expand their knowledge.
Unfortunately there was not time to actually demonstrate the exploratory techniques that we might typically use within the model. Maybe that would help you appreciate the value. Perhaps I also did not adequately describe how exploration can be used to verify correct functionality as well as finding bugs, so we can demonstrate what works as well as what does not work.
Thanks for the comment. In my work I help testers transition to more exploratory techniques, so I found your description an interesting take. Though most testers I encounter are already doing testing in an exploratory way that is on the level of your model or beyond. So it might have value, but so far not for me.
I don’t seem to get emails when people reply here, so I only just saw your comment. I would welcome an introduction to any good exploratory testers that you know, because I encounter very few.
I actually think that anyone can benefit from using the model because it provides a framework for your ideas. When I started testing more than a decade ago I found lots of bugs but not in a controlled or efficient way, and I had no way to convey to people what I had and had not done. Apart from people like James Bach, I have never met anyone who did not benefit from using the model. That said, if you have something better, please share because I am very open to new ideas.
I rather suspect that you are mostly interested in finding the interesting bugs that we find at level 8 of the model, and that’s important (and fun!). But testing is not just about finding bugs – it’s also about proving that things work, and exploratory testers tend not to want to do that. The model is intended to help you do both. Unfortunately 45 minutes is nowhere near enough time to show how it works in practice.
I found this quite interesting. I work for Microsoft and we have developed this tool for exploratory testing. Please check out the link below and let me know what you think about it: http://blogs.msdn.com/b/visualstudioalm/archive/2012/03/12/getting-started-with-exploratory-testing.aspx
You also might want to take a look at Shmuel Gershon’s Rapid Reporter as well as BB Test Assistant.
I saw a Microsoft demonstration of that product. It looked good for supporting exploratory testing if you are in a VS TFS environment, but we never are because all our work is done remotely from our clients.
Well, I will keep my opinions to myself about what I think of the MS “Test Manager” product :) Exploratory testing has been much maligned over the last year or so, and frequently referenced incorrectly. ET had been happening to some degree for years, though not formally recognized as a testing process, until it was given a name. Various modern methodologies acknowledge the value of ET, and the biggest battle I found was justifying it to the business. “Ad hoc” was the most popular (and most incorrect) meaning taken. Exploratory testing has been a norm in testing for some time, so it is high time it was embraced as part of the overall testing process.
There is a danger in focusing on ET as if it were a separate type of testing. In the layers outlined, I can instantly see crossovers into other types of testing. Perhaps incorporating ET into testing in general is the way forward, rather than trying to treat it as a special case (i.e. an option, rather than a default). Once a weakness in project methodology is identified, there does seem to be a mass online scramble of opinions and solutions – none more so than with Agile, of course. This can do more harm than good in the long term.
But I welcome these discussions around Exploratory testing, if only serving to highlight that testing is an important and skilled discipline, not a playground.