In yesterday’s European Weekend Testing session a discussion came up on whether or not to follow the given mission. The mission we gave was to generate test scenarios for a particular application. During the debriefing, just one group of testers had fulfilled this mission to the letter, generating a list of scenarios for testing later; the remaining two had deviated from the mission and tested the product, thereby providing meaningful feedback on the usefulness of the product itself. Jeroen Rosink has already put up a blog entry on the session. He mentions that he defined his own mission, and that it’s OK to do so since it’s his spare time.
As mentioned I made my own mission also. I believe I am allowed to do it because it is my free time, and I still kept to the original mission: define scenarios. For me, the questions mentioned in the discussion were a bit like the scenarios.
Of course, Jeroen is right, and he provided very valuable feedback in his bug reports. But what would I do at work when faced with such a situation? Should I simply test the already available application? Or should I do as I was asked? Well, it depends, of course. It depends heavily on the context, on the application, on the project manager, on the developers, on your particular skill level, maybe even on the weather conditions (nah, not really). This blog entry discusses some of the aspects I did not want to go into in the chat yesterday.
One week earlier, Michael Bolton attended our session. He explained that his approach started with building a model of the application under test. Then he exercised the product based on that model, thereby refining his own mental model of the application.
Jeroen picked this approach as well. He built a mental model of the application and went through the application with that model in order to refine it. Ajay Balamurugadas, on the other hand, built his model and translated it completely into test scenarios.
To be clear, both approaches are reasonable. Indeed, knowing when to use which is essential. The software engineering analogy teaches us that we have to make decisions based on trade-offs. The trade-off at play here is model building (thinking) as opposed to model refinement (doing). The more time I spend thinking through the application in theory, the less time I have during a one-hour weekend testing session to actually test the application. Conversely, the more time I spend on testing (and the more bugs I find by doing so), the less time I can spare to refine the model I initially made. The right balance between the two is context-dependent. I summarized this trade-off in the following graph.
There are more of these trade-offs in software testing. You can find most of them on page four of Exploratory Testing Dynamics, where the exploratory testing polarities are listed. Thanks to Michael Bolton for pointing this out to me.
Basically, the main distinction between the two missions followed is that Ajay used his imagination and all the information available to build the model for his test scenarios, while Jeroen questioned the available product to get feedback from it. While we may not always have an application available to help us make informed decisions about our testing activities, the weekend testing session was constructed so that a product was in place which could be questioned.
Indeed, in software testing it is vital to make informed decisions. The design documents, the requirements documents, and the user documentation rarely satisfy our call for knowledge about the product. Interacting with the product can therefore reveal vital information about the product and its shape, gathering information that refines your model of the application at hand. Of course, for very simple applications or for programs in areas where you are an expert, this may be unnecessary. Again, the contextual information around the project provides the necessary bits of information about which path to follow.
So, professionally speaking, Ajay and Jeroen both did a great job of testing. The key difference I would make at work is that I would inform my customer about the deviation from the mission. There might be legal issues, e.g. a call to follow a certain process for power plant testing, that demand following the mission to the letter. Negotiating the mission with the customer, as well as proposing a different mission when your professional sense calls for it, is essential for an outstanding software tester. Deviating from the original mission is fine with me, as long as you can deal with the Zeroth Law of Professionalism:
You should take responsibility for the outcome of every decision you make.
Since we’re more often than not the professionals when it comes to testing software, we need to inform our customer and our client about deviations from the original mission. We have the responsibility to explain our decisions and to make a clear statement that it’s unprofessional to deliver untested software, for example. Of course, they might overrule your decision, but then they take over the responsibility for the outcome themselves. And just because you lost one fight does not mean you should give up raising your point and lose the battle.
Last, but not least, I hope that the other participants, Vijay, Gunjan Sethi, and Shruti, do not feel offended that I mentioned just Jeroen and Ajay here. They also did a great job of testing the product, of course.
4 thoughts on “Big Test Design Up-Front”
Since EWT sessions happen in your free time, all people taking part are free to disregard the mission, stop in the middle and leave, or do whatever they want with their own free time. If they do follow the mission, they should understand WHY they’re doing it, though:
“You are moving from lovely Europe with measurements based on the metrics system to the US with imperial units. Test Converber v2.2.1 (http://www.xyntec.com/converber.htm) for usability in all the situations you may face. Report back test scenarios for usability testing”
Reading the last sentence, I immediately ask myself “why?”. Why should the tester create scenarios for usability testing? Once an explanation is given that satisfies the tester, the motivation to actually do it will be higher. As in a work environment: if you give someone a task but don’t tell them why it’s important, motivation won’t be high. Don’t get me wrong, I think the application is a good one to test and write scenarios against. Apart from the very good explanations you gave in this blog about moving away from the mission for valid reasons, maybe the motivation of the tester plays a big role as well?
The last part you described, taking responsibility for every decision you make and being able to defend it regardless of whether it’s seen as right or wrong, is what makes the difference between a tester and a test professional in my eyes.
I’m looking forward to next week’s session, in which I’ll be able to take part again (fingers crossed).
Great feedback. Interestingly, while thinking through whether or not the mission was accomplished, we remembered your comment from a similar earlier mission. I support your claim about the difference between a tester and a test professional. Indeed, it’s at the core of Matt Heusser’s Miagi-Do school of software testing.
Well, I would use the product to gather additional information about it, building my model and refining it. The points where the documentation alone is weak leave gaps in your model. You can fill those gaps by talking with the customer, by talking to the developer, or by talking to the product, if it’s available to you. The risk of wrong information from any of these sources is the same to me. This is why testing relies on critical thinking.
I might have a blog entry on this topic coming up on STC today, which dives into it in some detail. I hope Rosie puts it up today. I will put a link on my blog when I see it.
Interesting findings on how a model is developed: model building vs. model refining. Both are certainly valid and valuable. However, I personally think it is very important to build a model from all available input sources except the product itself. This approach is familiar to me since I’m used to defining the tests for my product prior to building it. Even in your given scenario, where the product is already at your fingertips, I’d avoid asking or utilizing the product in the first instance. I’d rather question the implementation of a feature, not its description, even if that description is insufficient or rudimentary.
“The last part you described, to take on responsibility for every decision that you take and to be able to defend it, regardless if it’s seen as right or wrong is what makes the difference between a tester and a test professional in my eyes.”
Absolutely. This is why I think the debriefing is an essential part of each session. (Michael Bolton actually suggested doing a pure debrief session, which I’d really like us to do at some point.) Learning how to explain your decisions is so important, and so completely ignored in most commercial test training. (It’s also why I’m really taken with James Bach’s testing playbook idea: explicitly capturing the reasoning behind your test design decisions and making those decisions transparent for others to examine and challenge.)