Creating good testing challenges

Time and again I run into an ambitious tester who presents me their latest idea for a testing challenge. “What do you think about this situation where you set the tester out to learn X?” “You could provide product foo to the tester and have them learn X. What do you think?” Time and again, I think there is something seriously flawed with this approach to creating and designing testing challenges.

Closed problems

What’s wrong with the examples that I showed? Let’s work through one together. I will think of a number, and you have to find out which one it is.

Ready, ok, let’s go.

No, it’s not 42.

It’s not 666.

It’s not NaN; that’s not even a number.

No, it’s even not 31276391861.

Are you there yet? Starting to feel annoyed?

What’s the problem with such testing challenges? There is one true answer to the challenge. This is called a closed problem. Closed problems come with the drawback that there is exactly one right answer: you will get it quickly if you understand the underlying rules of the system, and if you don’t, you will slowly become annoyed, like the proverbial boiled frog.
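To make the distinction concrete, here is a minimal sketch of the two kinds of challenges. The secret number, the list of risk areas, and the function names are all illustrative assumptions, not something from the post:

```python
def closed_challenge(guess: int) -> bool:
    """A closed problem: exactly one right answer, everything else is 'wrong'."""
    SECRET = 17  # the one true answer, known only to the challenger
    return guess == SECRET


def open_challenge(test_idea: str) -> bool:
    """An open problem: many different answers can count as 'right'."""
    # Any idea touching a relevant risk area is a valid contribution
    # (the risk areas here are invented for the sketch).
    risk_areas = ("boundary", "timeout", "unicode", "concurrency")
    return any(area in test_idea.lower() for area in risk_areas)


print(closed_challenge(42))                     # False: only 17 passes
print(open_challenge("Try unicode filenames"))  # True: one of many valid ideas
```

The closed variant rewards guessing the challenger’s hidden rule; the open variant rewards any line of thinking that engages with the problem, which is much closer to how testing work is actually judged.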

When closed problems are given out as testing challenges to apprentices, junior testers, and so on, they carry this notion that there is one true answer. The problem is that such challenges bear little resemblance to the actual work of testers. When we are working for a client, we are rarely sent out to find that one bug the developers hid in the software. Or those ten bugs. Or twenty. The real world usually has more to do with unforeseen problems that we discover. Rikard Edgren was the first to introduce me to the idea of serendipity, and how testers can make their environments work in favor of it.

Another problem with closed problems stems from a factor I consider crucial when it comes to learning. A testing challenge you give out to someone will usually not be closely related to their daily work, and they will obviously have to put some of their leisure time into it. That being said, the challenge should be fun to pursue rather than draining the student’s energy. The student should feel engaged to solve the puzzle. If there is one right answer and you just need to sort of “get it”, this usually becomes frustrating for most folks pretty quickly, because they feel trapped. Testing challenges should be fun.

Don’t get me wrong: there should be just the right amount of “kicking someone out of the comfort zone”, without kicking them out into the land of frustration. As the psychologist Csíkszentmihályi found out, flow happens when the level of challenge meets the skill level of the student, without pushing them off into the land of anxiety or boredom. Personally, with a closed problem I can shift from frustration to boredom pretty quickly when I don’t “get it”.

Lessons from Weekend Testing

I remember some early lessons from my time co-facilitating a couple of Weekend Testing sessions. I think it was Ajay who told us that he had experimented with setting out traps for the attendees, but quickly found out that such traps were simply not necessary. Testers were perfectly capable of creating their own traps.

My training experience tells me that giving out more open problems, where there are several “right” solutions, is more demanding on the trainer. You need to design those exercises well for the learning objective. You also need to debrief the activity well enough to give everyone involved the right amount of feedback so that they can learn and grow from the experience.

That’s what we mostly did during Weekend Testing sessions. While there was one hour of engaged testing at the beginning, testers took the biggest value out of the sessions from the debriefing part. And every trap you set in the initial challenge was usually matched by ten traps testers set for themselves. At some point we realized that we simply didn’t need the up-front trap-laying if testers could do that on their own.

Lessons from Experiential Learning

Ever since I attended PSL, I have been aware that experiential learning is different. It goes deeper, and you take away from it whatever you need in your current situation. The learning is context-dependent: it hinges on the situation of the student.

There is a pitfall involved. If the student feels overwhelmed by the learning you can offer, they will have a hard time stepping back from the challenge. Students need to feel safe to step back from a challenge, and the challenger should be in a position to evaluate the thin line between leaving the comfort zone and destroying the student’s learning environment.

Also, during the experience, testers should be in a position to play their role fully. They should be able to contribute like they always do. In the debriefing of the activity, though, the challenger needs to build the connection from the experience to daily work.

That is what we somehow achieved in Weekend Testing sessions. People were sent out to test on their own, and we held a debrief afterwards. People who skipped the one-hour debriefing usually turned out to be one-timers, while folks who engaged in the second part came back to learn more.

Open problems

What I took away from PSL in particular is that open problems are more fun to solve and usually reflect the complex reality of our work situations better. Learning becomes more fun, and it comes with more direct take-aways for your next day at the office.

When designing testing challenges, we should avoid challenges that are too closed or that do not fit the current situation of the student who wants to learn something. If the experience is frustrating, they will not come back to learn more, and will instead lose interest in us.

Testing challenges should be open: we observe testers in an environment similar to their work situation and help them make the connection back to their take-aways. In the end, that is a lot more work for the challenger, but it will make the student grow more while having more fun.


4 thoughts on “Creating good testing challenges”

  1. Testing Challenges: A sensitive area where we could help a budding tester or kill the curiosity to some extent.
    After reading this post, I could think of Testing Interviews as another area where a similar approach would help.

    Some of them ask questions expecting the ONE right answer.
    I like it when an open question is asked and a discussion follows.
    Thinking of other scenarios where OPEN questions would help…

  2. There is no such thing as a closed problem, only a closed way of teaching.

    I can do the “guess the number” problem in an open way. I can take any exercise and make it open. It is all about how you interact with the student.

  3. It also depends on what the goal of the challenge is. (Sounds like you are making an assumption that testing challenges are only about peer feedback, or student growth.)

    During an interview my goal isn’t always to find out if they can learn well. Sometimes I want to know what they already know, so I can assess where I will need to train or supplement them.

    If the goal of the challenge is to find out if the person gets easily frustrated, a closed challenge isn’t a bad thing. I personally find closed challenges a quick way of finding out fast what makes certain people upset and on what topics or styles of thinking.

    1. Thanks for the reply, Isaac.

      In a job interview, I am interested in how the applicant will solve a problem I present to him. Like you, I am interested in what he already knows, and how he plans to fill the gaps in his knowledge. Both are valuable to me if I can see how he solves what I call an open problem.
