CAST 2011: A report on the Testing Competition

Last week at CAST 2011 we were challenged by James Bach to a testing competition. While I was initially a bit reluctant to join the Miagi-Do team, the opportunity to test with all these fine folks couldn't be missed. One of the lessons that James later taught us is that you don't know someone unless you have tested with her or him. So we formed a Miagi-Do team consisting of Matt Heusser, Michael Larsen, Ajay Balamurugadas, Elena Houser, Adam Yuret, Simon Schrijver, Justin Hunter and Pete Schneider. Not all of them were Miagi-Do testers, but we kicked butt, I think. Since Matt was part of our team, we knew right from the start that we wouldn't win any of the US-$1401 that James had set as a prize. Here is my report on how the competition and the aftermath went.

The challenge was simple. James had a program. We had four hours to test it and write a report. Along the way we were asked to report any bugs we found. The first trap was the conference wifi, which prevented us from downloading the application. So I picked up the documentation and took a look at that first. It was a Windows-only application, which pretty much disqualified my MacBook from doing any testing. I ended up pairing with both Michael Larsen and Adam Yuret.

While I was digging into the documentation, my teammates had organized a USB stick and shared the application with the others. Although we came up with this approach quickly, we still wasted nearly half an hour on the setup.

Having had previous experience with such quick sessions, I suggested we use inspectional testing to get a grasp of the application, split up into several teams across the table, and manage the whole process using session-based test management in short iterations. Based on that experience, I recommended 20-minute sessions followed by 5 minutes of debriefing.

In the first session we got an overview of the application. In the debriefing we shared what we had seen and decided what to go for next. This worked amazingly well. Overall we ran five sessions, and in the last 20 minutes we found fewer and fewer bugs and new ideas. This tells me that we had taken our testing pretty far.

On the coordination side, the debriefings alone helped a lot. Unfortunately we had a problem pulling together all the bugs we found for our final report. We had used several different approaches across the five sessions and the five sub-teams we had formed, and in the end we simply put three bug lists from the pairing stations into our final report. So if I were to do this again, I would strive for a higher level of coordination up front. Everyone jumped straight into the application once they got it, and at times it was really hard to stop for five minutes of debriefing. It seems you can distract basically any tester with an interesting problem, a lesson I had already learned at Weekend Testing sessions last year.

After the third session Matt called us out into the hallway. We had so far neglected the fact that we had to submit a final report, so we got together to discuss how to continue from there. By then, two of our four hours had passed. We agreed to go for two more sessions and then compile the final report, leaving a full hour to get it written.

For the report I paired with Justin Hunter. He asked me to come up with a mindmap-like structure for the things we had found so far. We had recorded the areas of interest on a flip chart after each session, and had also noted what each pair was heading for next time. We first tried a structure that captured both kinds of information. Eventually we realized that James would not be interested in the process of our testing, but in the areas we had covered. So we combined these areas in a mindmap and headed back to the others for the final 20 minutes (or so). For my personal taste, spending 40 minutes on this mindmap was a bit too long, but maybe it was worth it.

In the meantime our teammates had been busy compiling the final report. There was a lot of activity around the table, and in the end we managed to submit our final report, hastily putting everything together.

The next day we found out that we had failed dramatically with our final report: both James and the programmer felt our statements about the product were not sufficiently backed up with arguments. We could have taken first place if our final report had been better in this respect.

Among the things considered for the final ranking were developer relationships, bug advocacy, and the final report. On the relationship side, Matt went to the programmer quite often. His drive and courage to talk to the programmer is a lesson I have to take into my own testing work. I usually strive to talk to the programmer as often as possible, but I hold back more often than I should because I fear I might annoy or bug (sorry for the bad pun) them too much. Talking to the programmer is part of my job as a tester. This is a new emphasis I take away from the experience.

After failing in such a dramatic way, Adam Yuret took full responsibility for the failure and later asked for a retrospective. While I don't think it's Adam's fault that we failed, I appreciated the retrospective, which Anne-Marie Charrett facilitated in the evening. We went through the four hours using a timeline based on an emotional curve and then discussed the ups and downs on it. What strikes me is how consistent our feelings were. The beginning was chaotic; then we went for inspectional testing and session-based test management, which improved the situation dramatically. The daunting task of getting the report together was felt differently by the various people attending: some liked it, I not so much. At the end of the four hours, we all felt great about what we had achieved. Never having worked with each other before, we had excelled in the competition. Wonderful.

I really enjoyed the competition. Knowing up front that we were disqualified from the prize made it a fun, deliberate-practice setting. I still think we kicked butt, and we set ourselves up to learn from our failures. From my perspective we did the best thing we could have done. I hope to work with these people again. Really amazing.


5 thoughts on “CAST 2011: A report on the Testing Competition”

  1. Hi Markus,

I think what impressed me the most was your desire to learn and improve. The testing competition was only one point in time. The retrospective that you did and the learning you took away from it will last a lifetime (hopefully!).

    Detective Colombo

  2. Nice post, Markus! I felt as if I re-lived the experience all over again. I thought it was great teamwork under pressure and time constraints. It was nice to see the 3rd principle of the Context-Driven School at work – ‘People, working together, are the most important part of any project (project’s context)’ =)

  3. Your team did an excellent job and gained the highest score overall. It was on a small but important point that you fell short: not insulting the developer and having a warrant for your arguments.

    If you are going to argue that a product is “not worthy” then you better know what the quality standard is. Nowhere in your report did you disclose what the quality standard actually was. This is because you didn’t check in with the developer about that.

    Otherwise, I loved what you did, and so did the developer.
