EuroSTAR: Exploratory Test Management Roundtable

On the final day of EuroSTAR 2010 there was an Exploratory Test Management Roundtable facilitated by James Lyndsay.

At the roundtable we found out that the kick-off and the debriefing parts of test sessions are essential. During the debriefing, test managers get insight into how testers managed their time and focus. By finding out how much time a tester spent preparing tests, setting up the system, and actually testing, the test manager learns whether test data setup or an automated installer could help speed the tester up. Beyond this, the costs and value as well as the scope and benefit of testing activities are relevant. During the debriefing the test manager can find out how much value the testing brings, and whether testing costs could be reduced by narrowing test missions and bringing up new charters, or whether charters should be extended if there are areas that have received too little attention and focus so far.
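To make the time-and-focus discussion a bit more concrete, here is a minimal sketch of what a debrief record could capture. The `SessionDebrief` class, its field names, and the example numbers are my own assumptions for illustration, not anything prescribed at the roundtable:

```python
from dataclasses import dataclass

@dataclass
class SessionDebrief:
    """Hypothetical record of one exploratory test session debrief."""
    charter: str
    setup_minutes: int      # preparing test data, installing the system
    testing_minutes: int    # actual test design and execution
    reporting_minutes: int  # investigating and writing up findings

    @property
    def setup_ratio(self) -> float:
        """Share of the session spent on setup rather than testing.
        A consistently high ratio hints that test data setup or an
        automated installer could speed the tester up."""
        total = self.setup_minutes + self.testing_minutes + self.reporting_minutes
        return self.setup_minutes / total if total else 0.0

# Example: a 90-minute session where half an hour went into setup
debrief = SessionDebrief("Explore the import wizard", 30, 45, 15)
print(f"{debrief.setup_ratio:.0%} of the session went into setup")
```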

One of the main points discussed was the introduction of new testers to test charters and how much guidance to give them. The whole discussion reminded me of situational leadership theory. In the beginning, new testers need clear advice and guidance. Over time the leader has to find the point at which he can give the tester more freedom, and less direction is needed. So, while in the beginning the leader might pair up with the new tester, over time the tester should be able to do more and more work on her own. At this point the leader can delegate more work to the tester.

From the discussion at the roundtable I took away two things that I want to investigate over the next few months. First, I would like to try out ideas for creating test charters collaboratively. The identification of test charters reminded me a lot of how Agile teams estimate their work using planning poker or Magic Estimating. Creating charters as a team in a collaborative setting seems worthwhile to me, and it is an idea I have wanted to explore further ever since.

The second idea I have in mind is how test automation and exploratory test sessions can be used in combination. How can we track test automation work and exploratory test sessions on the taskboard, and provide feedback to the whole team? There are stories on the web from teams who track test sessions as tasks on the taskboard as well. I would love to play with some combinations, see what works and what doesn’t, and find out how to help teams make the trade-off more explicit.
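As a rough sketch of what this could look like, the shape below treats automation tasks and exploratory sessions as peers on the same board, which is one way to make the trade-off visible. The `TaskboardItem` type and the sample entries are assumptions of mine, not a report of any team's actual practice:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class TaskboardItem:
    """Hypothetical taskboard entry; automation work and
    exploratory sessions are tracked side by side."""
    title: str
    kind: Literal["automation", "exploratory-session"]
    status: Literal["todo", "in-progress", "done"]

board = [
    TaskboardItem("Automate login regression checks", "automation", "in-progress"),
    TaskboardItem("Session: explore password reset flow", "exploratory-session", "todo"),
    TaskboardItem("Session: explore import wizard", "exploratory-session", "done"),
]

# A simple standup summary makes the automation/exploration balance explicit.
for kind in ("automation", "exploratory-session"):
    open_items = [i for i in board if i.kind == kind and i.status != "done"]
    print(f"{kind}: {len(open_items)} open item(s)")
```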