Four months have nearly passed since I started my new job at it-agile GmbH. Lots of things have happened since then. I got to know many teams, and I learned a lot about design, architecture, test-driven development, and also about testing. This blog entry is about the experiences I have had since September teaching ATDD – I deliberately call it ATDD since I haven’t found a more suitable name, although I know that name should be replaced with something different – and about what I plan to work on in the next year.
ATDD – that is, acceptance test-driven development – is a requirements elicitation approach which provides development teams with automated tests as a side effect. Among other things, my first mission at it-agile was to develop an ATDD course – focused on practical skills and how to apply them, thereby making the transition into the participants’ daily work as smooth as possible. That was the vision.
Aware of the fact that no two teams out there are alike, we decided to develop the course in a modular way. There are two main modules, which can be taught at different depths. The first main module is the ATDD part, which teaches Specification by Example, specification workshops, and how to automate the examples from your specification with different frameworks. We also dive into how to make these tests as maintainable as possible by applying the techniques the frameworks provide – and by using test-driven development for your test fixture code. The second main module covers Exploratory Testing: testing heuristics, test oracles, managing testing based on sessions and threads, and how to incorporate all of this into an Agile team and workflow. Overall this seems to be a mix that most teams can benefit from.
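To give a flavor of what Specification by Example produces, a specification workshop might distill a business rule into a table of concrete examples. Here is a small invented sketch in Gherkin (the volume-discount rule and all numbers are made up for illustration, not from any of the teams mentioned below):

```gherkin
Feature: Volume discount
  Scenario Outline: Discount depends on the order total
    Given a customer places an order worth <total> EUR
    When the invoice is calculated
    Then a discount of <discount> percent is applied

    Examples:
      | total | discount |
      | 50    | 0        |
      | 500   | 5        |
      | 5000  | 10       |
```

The point is that business people, testers, and programmers can all read and challenge such a table, while a framework can execute each row as a test.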
Subject matter experts
But there is one problem with ATDD: how to teach it to teams with very different backgrounds. So far, I have seen three completely different teams working with ATDD. The first team consists of subject matter experts with an out-sourced programming team. Two of my colleagues built the first test automation approach based on Cuke4Duke. After the initial delivery we left the team on their own, and found ourselves surprised when we got back. They had doubled the number of step definitions and extended the tests we had delivered to them by nearly an order of magnitude – all of this without dedicated programmers or testers on their team, just subject matter experts. When I first heard about this team, I was surprised not to find their particular story in Gojko‘s upcoming book on Specification by Example.
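Step definitions are the glue between the plain-text examples and the system under test, which is why their number is a rough measure of how far the team extended the test suite. As a sketch of what such glue code can look like (the class, method names, and discount rule here are all invented for illustration; the Cuke4Duke annotations that bind each method to a Gherkin step are left out so the class stays plain Java):

```java
// Hypothetical step-definition class. In Cuke4Duke, each public method
// would additionally carry a framework annotation with a regular
// expression matching the Given/When/Then step text.
public class DiscountSteps {
    private int orderTotal;       // state carried between steps of one scenario
    private int appliedDiscount;

    // "Given a customer places an order worth <total> EUR"
    public void placeOrder(int total) {
        this.orderTotal = total;
    }

    // "When the invoice is calculated" - invented rule for illustration
    public void calculateInvoice() {
        if (orderTotal >= 5000) {
            appliedDiscount = 10;
        } else if (orderTotal >= 500) {
            appliedDiscount = 5;
        } else {
            appliedDiscount = 0;
        }
    }

    // "Then a discount of <discount> percent is applied"
    public boolean discountIs(int expected) {
        return appliedDiscount == expected;
    }
}
```

Because the step methods hold all scenario state in one small class, subject matter experts can add new examples without touching the glue code at all – plausibly part of why this team could grow the suite on their own.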
The second team built software for intensive care units in hospitals. Based on the data that software provides, doctors make life-critical decisions. The team consisted of programmers only; their first tester had just been hired. We worked through a one-day workshop on ATDD and how to use it. In the afternoon we took a closer look at the frameworks that support ATDD, and made our decision based upon that. In their particular context FitNesse with C# made a lot of sense. I then spent the remainder of that week working out the first test prototype, enabling them to build more upon a solid starting point.
The third team actually ran a beta version of the course that we are offering from 2011 on. The team consists of programmers with a technical background, and their testers are medical secretaries from medical practices. Thus far they had tested completely manually, and the testers had been sent to an ISTQB certification program in order to get a basis for testing. Teaching these testers ATDD seemed like a real challenge to me. On top of that, one of the participants was hearing impaired. We had scheduled three days, and I had a plan for all three. At the end of the first day I was surprised to find that I had already covered the material for all three days. We were then able to spend the next two days entirely on automating tests. Luckily we had one programmer, one test automation programmer, and one programming-infected tester in the course as well. At the end of the third day the participants rated the course with a return on time invested of 1.4, as can be seen in the picture of the flipchart I took later on.
Last week I attended the release retrospective at that company. I was surprised by the influence I had had on the testers. Some were already able to apply what they had learned, while others still struggled with it, lacking help from a programmer to set up the environment for them. During the “Generate Insights” part of the retrospective, which was conducted as an Open Space split across the two days, we talked a lot about pairing testers with testers, as well as pairing testers with programmers on their teams, in order to help them make progress.
There is one magic ingredient that helped me teach ATDD to the last group, and I plan to extend my use of it over the course of the next year. Shortly before the course with the last team, Stefan Roock, a senior colleague of mine, reported on a course he had just attended on Training from the Back of the Room. I had only read about it, but found the concept worth trying out. There are six principles that foster learning in a course setup. Instead of being taught through PowerPoint, participants get involved right from the start. Since doing something is better than passively receiving knowledge, this also helps to deepen the understanding of the taught material. This is something I explained in the EuroSTAR webinar Alternative Paths to Self-Education in Software Testing as well, and in the two presentations I held on the topic at the Agile Testing Days and EuroSTAR 2010.
That said, I started off with PowerPoint, and noticed by lunchtime that it was going to fail dramatically. So I asked the group what they wanted to do instead. We got everyone paired up to specify together, presented the results afterwards, and talked about some improvements. Then we got in front of the computers and typed the examples in. At the end of the day we had covered decision tables in theory and practice, and the remaining table types just in theory. On the next day we shuffled the pairs and tried to come up with examples for the other table types. By the end of the day we were able to automate some of them by returning hard-coded values. On the third day we practiced, practiced, practiced, so that everyone got a feeling for FitNesse and how to write tests.
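A decision table in FitNesse pairs a wiki table with a plain fixture class: the framework calls a setter for each input column of a row, then calls the no-argument method behind each output column (a column whose header ends in “?”) and compares the result against the cell. A minimal sketch of such a fixture – with an invented name and an invented discount rule, and in Java although a team’s actual stack may differ – could look like this:

```java
// Hypothetical decision-table fixture. For each table row, FitNesse
// calls setOrderTotal(...) with the input cell, then discount() and
// checks the returned value against the expected output cell.
public class VolumeDiscount {
    private int orderTotal;

    // Input column "order total"
    public void setOrderTotal(int orderTotal) {
        this.orderTotal = orderTotal;
    }

    // Output column "discount?" - invented, hard-coded thresholds
    public int discount() {
        if (orderTotal >= 5000) return 10;
        if (orderTotal >= 500)  return 5;
        return 0;
    }
}
```

A wiki table with the header row `|volume discount|`, the column row `|order total|discount?|`, and one data row per example would then drive this fixture row by row – which is also why “automating with hard-coded values” first, as we did on day two, is such a natural stepping stone.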
Building on the ideas from Training from the Back of the Room, I would like to explore the walking-around parts in more depth. So far, having everyone pair up and discuss solutions to their automation problems seemed to be the right thing to do. There were of course arguments for one approach or the other, but we tried out both and could see advantages to each. One thing I would like to change is the frequency of exchanges between pair partners: having recently attended a demonstration of the Training from the Back of the Room principles, I noticed that I should switch the pairs more often. I am sure I will have plenty of new insights before I get around to actually reading the book behind the technique. In addition, I am eager to find out how this method scales up for Exploratory Testing trainings, though I can already see parallels to Weekend Testing sessions.
I am looking forward to learning more about this in 2011.