Today, I published the first set of attendees for the GATE Workshop on 1 October in Hamburg, Germany. By name, these are
Maik Nogens, Meike Mertsch, Eusebiu Blindu, and Sven Finsterwalder so far.
As we have received fewer submissions than we hoped for so far, I think I need to write something about my expectations, as I consider myself the content owner of the German Agile Testing and Exploratory Workshop. What strikes me when I visit teams claiming to do Agile is that I often find them doing one of the following:
- Exploratory Testing – applied badly, without debriefings, without charters, and without the collaboration that would make it more structured and provide product owners and managers with the information they are asking for
- Test Automation – mostly done by programmers, or by testers with a strong programming background, sometimes not going beyond unit tests at the integration level between multiple classes
As I see immense drawbacks in focussing on just one of the two approaches, I am convinced that Agile teams can do better by combining both worlds. Exploratory Testing alone might leave an Agile team with the problem that exercising all the tests becomes a burden over time – especially when programmers lack proper unit tests. Test Automation alone – even with ATDD – has the drawback that holes which are obvious to a human are left in the software.
That said, I am interested in good applications of Exploratory Testing on Agile teams: what helped them succeed, and what could help them manage their Exploratory Testing. I am also interested in Test Automation topics, and how they helped Exploratory Testing gain momentum. Finally, I am interested in talks about how to prepare the tester’s mind, and where the connection between traditional testing techniques and Agile testing techniques might lie.
So far, the schedule leans strongly towards Exploratory Testing. I like this to some extent, but I would also like to see more submissions on Test Automation, ATDD, BDD, you name it. So, if you think you have something to contribute, drop Maik or myself a line, and we may have a discussion about that. If you’re unsure what GATE will be, read my initial blog entry on it.
Michael Bolton beat me to blogging about it. Still, I want to throw in my two cents on the structure in Exploratory Testing debate. The source of the conversation was
On Twitter, Johan Jonasson reported today that he was about to attend a presentation called “Structured Testing vs Exploratory Testing”.
as Bolton writes. The implication of the talk is probably that Exploratory Testing and Structured Testing are opposites and mutually exclusive. Let’s see if this holds.
Continue reading Structured Exploratory Testing – an oxymoron?
The other day on Twitter I asked
What questions do you ask during an Exploratory Testing session debriefing?
Since I didn’t get any replies on this at all, I figured it’s time to come up with some ideas of my own. Letting others do the work doesn’t work on this one.
Continue reading Questions to ask during Debriefs
Inspired by the EuroSTAR 2010 Exploratory Test Management roundtable, I had an idea which I would like to play with a little. Since it doesn’t seem as if I will get to it any time soon, I decided to put it up on my blog, and maybe get some feedback from peers and early adopters who are eager to play around with the idea and can provide me with feedback. The idea is to collaboratively come up with test charters for all the Exploratory Testing sessions on your project.
Continue reading Collaborative Test Chartering
On the final day of EuroSTAR 2010 there was an Exploratory Test Management roundtable facilitated by James Lyndsay.
At the roundtable we found out that the kick-off and the debriefing parts of test sessions are essential. During the debrief, test managers gain insight into how the tester managed their time and focus. By finding out how much time the tester spent preparing tests, setting up the system, and actually testing, the test manager learns whether test data setup or an automated installer could help speed up the tester. Beyond this, the costs and value as well as the scope and benefit of testing activities are relevant. During the debriefing, the test manager can find out how much value the testing brings, and whether testing costs could be reduced by narrowing test missions and bringing up new charters, or by extending them if there are many areas that have received too little attention and focus so far.
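The time accounting a debrief probes for can be sketched as a small calculation. The following is a minimal, illustrative sketch – the function name and activity categories are my own assumptions, not part of any session-based test management tool – showing how a session’s minutes might be turned into shares that flag, say, an expensive setup phase:

```python
# Illustrative sketch of session-debrief time accounting (names are
# hypothetical): given minutes spent per activity in one session,
# report each activity's share of the total session time.

def session_breakdown(setup_min, test_min, bug_min):
    """Return each activity's fraction of the total session time."""
    total = setup_min + test_min + bug_min
    return {
        "setup": setup_min / total,
        "test": test_min / total,
        "bug_investigation": bug_min / total,
    }

# Example debrief: a 90-minute session where 30 minutes went into
# setting up the system. A setup share this high hints that prepared
# test data or an automated installer could speed the tester up.
shares = session_breakdown(setup_min=30, test_min=45, bug_min=15)
for activity, share in shares.items():
    print(f"{activity}: {share:.0%}")
```

Comparing such shares across sessions is one way a test manager could make the cost/value trade-off discussed at the roundtable visible.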
One of the main points discussed was how to introduce new testers to test charters and how much advice to give them. The whole discussion reminded me of situational leadership theory. In the beginning, new testers need clear advice and guidance. Over time the leader has to find the point at which he can give the tester more freedom, and less guidance is needed. So, while in the beginning the leader might pair up with the new tester, over time the tester should be able to do more and more work on her own. At that point the leader can delegate more work to the tester.
From the discussion at the roundtable I took two things that I want to investigate over the next few months. First, I would like to try out ideas for collaboratively bringing up test charters. The identification of test charters reminded me strongly of how Agile teams estimate their work using Planning Poker or Magic Estimation. Bringing up charters as a team in a collaborative setting seems a worthwhile idea to me, and I would love to explore it further.
The second idea is how test automation and exploratory test sessions can be used in combination. How can we track test automation work and exploratory test sessions on the taskboard, and provide the feedback to the whole team? There are stories on the web from teams who track test sessions as tasks on the taskboard as well. I would love to play with some combinations, see what works and what doesn’t, and find out how to help teams make the trade-off more explicit.
Over the course of the past week, I noticed a name for something several exploratory testers already do. I first came across the idea while reviewing one of the chapters of an upcoming book on how to reduce the cost of testing. The chapter was written by Michael Kelly and discussed Session-based Test Management – or “managing testing based on sessions”, as Carsten Feilberg recently put it at EuroSTAR.
The idea is simple. Challenged by Michael Bolton, I came up with this description: Inspectional Testing is like scratching the ice from the windshield of your car in winter. If you have lots of time, you scratch all the windows completely free. If you don’t have enough time, you know that you should scratch everything free before starting to drive, but you can also choose to scratch away just some of the ice, so that you see enough to start driving. You risk a car crash at that point, but the remaining ice will melt while you drive. So, depending on how much time you feel comfortable taking for your testing activities, you start with just enough testing. After the first charter, you reflect on your first time-boxed test session and come up with additional tests as you see fit. You see more clearly than before at this point, but you may want to dive into some topics in more depth. This is like letting the ice melt while you drive.
Continue reading Inspectional Testing
Henrik Andersson spoke about Exploratory Testing Champions at the EuroSTAR conference in Copenhagen. He described how he introduced Exploratory Testing in a large company.
Continue reading EuroSTAR: Exploratory Testing Champions