Collaborative Test Chartering

Inspired by the EuroSTAR 2010 Exploratory Test Management roundtable, I had an idea that I would like to play with a little. Since it doesn’t look as if I will get to it anytime soon, I decided to put it up on my blog and maybe get some feedback from peers and early adopters who are eager to play around with the idea. The idea is to collaboratively come up with test charters for all the Exploratory Testing sessions on your project.

Exploring a product for problems is an essential part of my understanding of “done” on any project. I love test automation, but I also set time aside for exploratory testing. With Agile methodologies focusing on the team and on collaboration, I think there should be a way to plan your test sessions as a team. But how can we reach a common understanding of the charters for test sessions within a team – whatever methodology we use?

At the roundtable someone (sorry, I forgot the name, but I think it was Henrik Andersson) explained that his team was debriefing and chartering their sessions together. During the debrief the team often came up with new charters for new sessions. Having everyone on your team know what the others did is a great thing.

But how do you charter together? That’s where my idea comes in. I would like to try out an approach similar to the way Agile teams estimate their work. The idea is that after reaching a common understanding of the application (e.g. through a collaborative inspectional testing session, through a first debrief, through an already existing understanding of the application, or …) the team should have a good basis for coming up with test charters.

The team gets together in a brainstorming session, where they identify charters. They can use any aids for this. For example they could start to create a story map of the application and see how each of the mapped items should be considered for a testing charter. The team might dive into the Product Backlog in order to get an understanding of the application and how to address risks. Or the team might use a risk analysis to define charters. In general the charters should not be constrained by whatever aid the team uses. It’s more important to come up with as many ideas for charters as possible. You can come up with short test sessions of 90 minutes, or with test threads consisting of multiple sessions. Since this brainstorming might take more time than it is worth, you might want to timebox it.

In a second step the team takes a look at their brainstorming results. The task then is to bring the charters into a priority order for your project. The target is to have a basic understanding of the priorities of all the charters that you brainstormed. The charters are intended to be worked on in this priority order during the next few days. Don’t try to make this priority too perfect. You are going to change the priorities based on the debriefings in the next few days anyway. Depending on your context, you may end up re-prioritizing a lot, or only seldom. In either case you will usually know which way your context tends before you start. Try to come up with a good-enough priority list of the test charters for at least the next one or two days.
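To make that “good-enough priority list” a little more concrete, here is a minimal sketch of what such a charter backlog might look like if you kept it in code rather than on index cards or a whiteboard. The charter titles and the `Charter`/`next_charter` names are my own illustration, not part of any particular tool or of the practice itself.

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """One brainstormed test charter; a lower priority value means 'test this sooner'."""
    title: str
    priority: int
    done: bool = False

# Hypothetical output of the brainstorming and prioritization steps:
# a rough, good-enough ordering for the next day or two.
backlog = [
    Charter("Explore the checkout flow for data-loss problems", priority=1),
    Charter("Explore report exports against large data sets (performance)", priority=2),
    Charter("Explore the admin screens for permission side-effects", priority=3),
]

def next_charter(backlog):
    """Pull the highest-priority open charter for the next test session."""
    open_charters = [c for c in backlog if not c.done]
    return min(open_charters, key=lambda c: c.priority) if open_charters else None

print(next_charter(backlog).title)  # -> "Explore the checkout flow for data-loss problems"
```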

That’s it for the first chartering. This shouldn’t take too long – restrict it to at most one hour. I am also unsure whether coming up with durations for your test charters should be done in a separate step. And when and how to break down test threads into individual charters is something I would have to play around with on the first project where I try this out.

Now, while working through your test charters in priority order, you will gain new knowledge. Since humans are not that good at guessing the future, you will find new and changing priorities during the next few debriefings. Spend some time (e.g. five minutes) after each debrief to reorganize your open test charters, come up with new charters, and prioritize them – similar to the way you built the initial test charter backlog. Over time you will gain knowledge about your project and adapt to changing risks.
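As a continuation of the sketch above – reduced to a plain list of titles so it stands on its own – the few-minute reshuffle after a debrief might look like this. Again, the charter titles and the `after_debrief` helper are made up for illustration; the point is only that finished charters drop out, newly identified ones get added, and a charter can be promoted when the debrief reveals a new risk around it.

```python
# Open charters, kept in priority order (first = next to be tested); titles are illustrative.
backlog = [
    "Explore report exports against large data sets",
    "Explore the admin screens for permission side-effects",
    "Explore localisation of the checkout flow",
]

def after_debrief(backlog, finished, new_charters, promote=None):
    """Drop the finished charter, append newly identified ones, and move one
    charter to the top if the debrief revealed a new risk around it."""
    backlog = [c for c in backlog if c != finished] + list(new_charters)
    if promote in backlog:
        backlog.remove(promote)
        backlog.insert(0, promote)
    return backlog

backlog = after_debrief(
    backlog,
    finished="Explore report exports against large data sets",
    new_charters=["Explore concurrent exports by several users"],
    promote="Explore the admin screens for permission side-effects",
)
print(backlog)
```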

That’s it. I wrote this idea (not to this extent, though) to Chris McMahon as a proposal for the second Writing-About-Testing conference in May, but he understood it as a proposal for test estimation. Though you might use your charter backlog as an estimation technique as well (maybe I will write about this at some future point), its primary benefit comes from the test planning activity.

So, if you dare to try this idea out before I can, drop me a comment and tell me what worked for you, and what didn’t.

12 thoughts on “Collaborative Test Chartering”

  1. Would this be applicable to new products or to maintenance products? I am trying to see if I can try this on my existing project. If we do charters, would they be for new features on an existing product?

    1. It doesn’t matter. Since you use test charters for your upcoming test sessions, you can test new products as well as maintenance products. Basically, anything you would like to test is put into a test charter. For an existing product this might be a new feature (explore functionality, explore side-effects on related areas, explore performance, …) or it might be a bug you got from your customer (test around this bug). Whenever you have a certain number of test charters queued up for your team, you can sit down, identify such a basic charter backlog, and later work on it.

  2. Hi Markus,

    This is similar to something we are planning to try out next release (Feb 2011). Initially we gathered thoughts from Simon Morley’s comment about Systems Play, Michael Bolton’s Rapid Reporting & Martin Jansson’s idea about Test Proposals.

    Simon Morley’s thoughts on test framing touch on aspects of this as well. That got me excited about the idea of collaborating with him on a process that spans initial planning/collaboration combined with traditional reporting aspects with a lean twist to them. You’re very welcome to participate in those discussions if you’d like – just drop me an email.

    I’ll blog about our collaborative low-tech mind map sessions when we try them out next month & let you know how useful they have been for us.

    Good thoughts & thanks for sharing.

    Cheers,

    Darren.

  3. I like this, Markus. We have some “tester games” coming up at the next team meeting. I may see if I can incorporate some of these ideas.

  4. Markus,
    Very interesting concept – I will try it out in an upcoming release.

    For each charter, do you typically include scope (with ‘good enough’ elaboration) and also identify the person responsible for or owning that charter?

    1. I would apply a pull principle for test charters: whoever is next to go can pick any charter. Ideally you combine this with paired testing, so that anyone is able to test anything as he sees fit.

  5. I agree – the sum of a group’s ideas is more than the sum of its individual members’ ideas.

    The first thought that came to my mind reading this post was: “OK, so what’s the difference between that and a Review?”
    And the difference as I see it – in a Review, we normally give feedback on some predefined artifact; in brainstorming we start from scratch.
    Personally, my experience shows that discussions without any common outlined basis are much less productive.
    So I would suggest:
    One team member writes an outline – the charters he thinks of, and their priority (~30 min).
    The rest/part of the team gathers for a short Review (sorry for using old ST terms – still, that does not mean these are wrong :-) ) – additional ideas are recorded, and priorities are fixed where needed (~30 min).
    * My past experience shows that 2-3 team leaders or experienced testers can raise ~30% additional ideas in such a session (each contributes his own favorite heuristics and specific knowledge of other related parts of the product/arena).
    I’d say that in that format, we can save up to ~70% of the logging work and trivial ideas which would only have slowed down the original brainstorming meeting described in the blog post.

    While people normally say “a picture is worth a 1,000 words”, a few years ago I started saying “But a table is worth a 1,000,000 words”.
    Which means: placing these ideas in a tabular format, with inter-dimensional relationships (like what should be covered on IE, Chrome, FF…) as well as the trivial column for priority, and maybe even stage (Sanity/Regression), can ease up the process of focusing and prioritizing, while making sure the presented material is lightweight enough for brainstorming.

    Kobi Halperin

    1. Nice addition. I like that idea, but I also see a risk in the forces around this. The brainstorming has the advantage that the team owns the charters, rather than just the team member who wrote the outline. For a functional team the pre-outlining might work better than for a team with problems accepting other viewpoints – that’s the Not-Invented-Here problem. That’s what I would play around with, at least.

  6. Hi Markus

    I like your idea of using brainstorming etc. to come up with more charters and to involve more of the team in the test planning.

    I wrote something similar to this a while back about hijacking scrums (http://steveo1967.blogspot.com/2010/02/exploratory-testing-and-scrums.html)

    From my own experience of this, I find that more of the team understands what is happening in testing, and instead of being isolated from the testing process they become part of it.

    Great idea.

  7. Hi Markus,

    I’m going to talk to my Project Manager when I meet with him in a week’s time and see if we can trial this in our next sprint. Even if I cannot get it onto an official project here, I would be interested in thinking the idea through further with you.

    Regards,

    Stephen
