On May 13th and 14th I attended the second Writing About Testing peer conference in Durango, CO. Chris McMahon was a very pleasant host. The overall theme was new frontiers in testing. The atmosphere was inspiring, given all the writing energy in the room as well as the terrain outside. Here are my impressions from the talks.
We started with a writing exercise. Chris challenged us to come up with an article in the classic introduction, three paragraphs of content, conclusion style. I paired up with Lanette Creamer. We discussed four different topics, picked one, and sketched out the content for an article on when TDD helps you and when you can go without it. We refined it over the two days, but it will need some more work before we can publish it. I think we put in some good work there.
Personas in Writing About Testing
Shmuel Gershon held the first talk, on personas and how they can help you in testing, and especially in writing about testing. After defining personas, he challenged our thoughts with questions like the following:
- When do they read?
- Where do they read?
- How do they read? Which medium?
- On which device are they reading?
- Where does the person get the reference from?
- What does the person do with your material next?
- What’s the context of the reader? What else does he read? Which other references does she use? Which books? Which blogs? Articles?
- What type of experience does the person have?
- How much does the person read?
- What does she find funny or offensive?
- What bothers the person?
- How well does the person know you?
- How does the person participate with you?
- Does the person like you? Dislike you? Is she indifferent?
- Purpose? Expectations?
I find personas really thought-provoking – especially when you get stuck on a particular piece. Framing the audience of your writing can help you overcome some problems.
Agile Testing Ninjas
Lanette held a talk on Agile Testing Ninjas, a reaction to Adam Goucher’s Agile Testing Pirates. She compared the different schools of thought we currently see in software testing to different ninja cults. She referred to stealth, vision, skill over tools, teamwork, and practice as the five ways of ninjas and testers alike. She reminded us to avoid “all it takes is a black pajama” thinking, questionable tools, flaunting a belt from a bathrobe, and putting infamy over the team objective – and no posing, please. As a final thought, she mentioned that only a ninja can sneak up on another ninja.
I found her talk really funny. If you get the chance to attend it, do it. Now.
Zeger van Hese presented on Artful Testing. Testers can learn and benefit from the arts by
- thoughtfully looking at art,
- applying critical theory and the tools of art critics,
- learning from artists (and how they look at the world).
He mentioned different characteristics of art: sensory anchoring, instant access, personal engagement, wide-spectrum cognition, and multiconnectedness. Referring to the work on multiple intelligences by David Perkins, Sternberg, and Gardner, he explained that our intelligence is based 90% on experience and 10% on reflection.
He pointed out several parallels between looking at art and testing. Both take time. As a heuristic for slowing testing down, he referred to session-based testing. Some people treat testing like painting by numbers: all it takes to test well is to fill in the numbered areas with the right color. (You may note that painting by numbers says nothing about the order in which to paint.) For out-of-the-box thinking he presented the formula:
2+7-118 = 129
and challenged us to come up with a single line that would make it a valid equation.
One thing I found particularly interesting in his presentation was the proposal of tours based on artistic styles. Take the cubism or surrealism tour the next time you get a new software product. Mystification and demystification, and binary opposites, gave me something to think about.
It was a rather long presentation; I think it could be cut in the middle and both halves would still work perfectly, resulting in two different presentations. One key take-away: your testing is different when you’re in the right mood.
Marlena Compton talked about difficult conversations, and how testers are particularly challenged by them. The easiest choice is often also the meanest choice. She encouraged us to break out of the asshole patterns. Stakes are high and opinions vary, especially when those opinions are strongly held. We came up with examples of this, like the bug vs. not-a-bug discussion, bug severity, the works-on-my-machine syndrome, and the no-user-would-do-that speech. Marlena gave us an exercise: write down what we want in a particular situation and what we don’t want, then construct a message by combining both with an “and”. Unfortunately I forgot to take a picture of the flipchart for the “it’s a feature” discussion we had. The outcome for my group was that we don’t want the customer to be exposed to a bug, and we do want clarity about the bug. It was a nice exercise, but I’m a bit unsure about the outcome.
After that we went into an emotional writing exercise. We were asked to imagine a current situation with high emotions, and put the feeling directly into our writing. It was a nice reflective exercise for me. This concluded the first day.
Zeger and I had a problem picking up the rental car on Saturday morning, so we arrived late at the conference. That was unfortunate, since I had wanted to facilitate the day together with Lisa. We ended up with Lisa facilitating the morning and me facilitating the afternoon. I missed the warm-up exercise and the first writing exercise.
Over the course of the two days, Lanette shared some of her testing games with us. We went through a testing exercise with swquisworld pen tops, and on Saturday morning the group was just concluding her straw game. In the afternoon we were asked to come up with test ideas for a new key on a computer keyboard. I loved the creative material Lanette came up with, and feel challenged to explore gift shops for more such testing games for my own classes.
Customer focused test design
Alan Page started his talk on customer-focused test design with the observation that he seemed to do the same thing over and over again at conferences: explain how to bring test automation to your team. He showed four areas he would like to investigate more deeply: scenarios, testing in production, shifts, and what he called inverted testing. Chris contributed that at SocialText they had a REST architecture, so a complete feedback system was already in place once they needed it. Regarding inverted testing, Alan said that we should consider turning testing around and starting with the *ilities. By putting more emphasis on them first, we can avoid architecture traps that constrain the performance or usability of the application. Alan mentioned that the classic bug-cost curve is wrong: some bugs don’t cost much, even late in the project, while others become exponentially harder to fix later. The latter are mostly bugs found through *ilities testing. As an example he told the story of their VoIP system, which is put under reliability tests so intense that you can turn off the network, count to three, turn the network back on, and your previous VoIP call is still working.
Alan explained that we need to work on mind shifts. For a functional tester it is OK to miss a bug, but we should learn from it. Mind maps of test design ideas can help us see what we are doing and communicate it more efficiently to others, especially business people.
Risk taking behavior in climbing
Matt Brandt talked about parallels between testing and climbing: the differences between what a professional would probably do and just taking the risk on software projects. In climbing, most accidents happen on the way down. Professional climbers lay out their route up a 2,000-foot wall in 5-foot sections, each with very well calculated risk. Regarding the different motivations of climbers, Matt pointed out that a beginner would state that he climbed a 5.12c, a more experienced climber would state that he wanted to have fun while climbing, and an expert would be interested in teaching others the craft of climbing. This reminded me of the discussions we had a few years back on the Software Craftsmanship mailing lists. For me it’s the same motivation in software craftsmanship, software testing, and basically everything that I do: teaching others how to excel at the craft.
There was a series of lightning talks in the afternoon of the second day. Chris McMahon started with a discussion of spiritual materialism. He pointed out that most discussions in software testing currently arise from “my practice is better than your practice” thinking. Some experts are more expert than others, and taking on the white belt should be considered more often.
A second lightning talk dealt with metrics. Testers often fall into the trap of providing estimates, only to find out later that a deadline was based upon them. The suggestion was to turn the question around and instead ask about the coverage of risks. Another point was that velocity is a measurement rather than a goal.
Another interesting lightning talk discussed how to bring change to an organization, especially in the light of new testing frontiers. Mandating change does not work. Rather, we should identify allies and opponents. Pilot the change with allies, and keep wondering what the next chunk to move forward could be. Regarding opponents, you will have to orchestrate the conflict. Leadership, after all, is disappointing people to the level they still feel comfortable with.
Sylvia Killinen talked about software craftsmanship. She reflected on the Software Craftsmanship Manifesto being a seemingly natural extension of the Agile Manifesto. Everybody involved seems to know what craftsmanship is; this holds for software, but also for more traditional crafts. She discussed personal opinion as opposed to measurable metrics. Purpose and function also apply to software. Rather than taking statements like “craftsmanship works with craftsmen” for granted, we should ask what the particular craftsmanship is about. For software, she came up with three elements:
- good design
- quality of execution
- performance of function
Regarding the final point, Sylvia pointed out that if the software doesn’t work, it doesn’t work. In the end, the customer does not care about the beauty of our code. The customer does, though, care whether it works in the first place.
Sylvia believes that the design of tools affects how good the product will be. A silversmith with a poor hammer will end up creating crappy jewelry; a software developer with a crappy IDE, well, you can make that connection yourself. For testers, she pointed out that we need to be there when the software is created. A program can have a good design and be executed greatly, yet still fail tremendously in its function. To test with craftsmanship, we should ask ourselves what the software is doing right now, rather than what it is going to do. Sylvia called on us to stop ignoring flaws: file bug reports immediately rather than forgetting them later. After all, craftsmanship in software, as in jewelry, is highly subjective. She compared the story of a $200 diamond ring with that of a $2,000 diamond ring, two completely different pieces of craft.
My own conclusion: it was an awesome seminar. I started a piece together with Lanette Creamer, exchanged many thoughts with other testers, and had a great time in Durango, CO. In the room you could almost sense the writing expertise and skill, and the stay among all these very experienced testers was inspiring. I would especially like to thank Chris McMahon for inviting me. I hope to see everyone again next year.