Tracking testing on the Scrum taskboard

Today, a colleague of mine, Norbert Hölsken, started off a discussion in our internal communication channel. He asked:

“How do you treat bugs on the taskboard that are found during testing? Create a new test task for each bug, and put it back in ToDo? Or create a bug, and a follow-up testing task for the bug?”

As it turns out, there are a lot of valid reasons to do it one way or the other. Yet the answer “it depends” helps neither a Scrum Coach nor a tester working in a Scrum environment. So I started raising some of my experiences and concerns, and some of my other colleagues replied as well.

Skip forward three hours, and I am writing a blog entry on my thoughts about it.

Test tasks

In my experience, test tasks are completely different from development tasks in Scrum. There is one itchy issue with creating test tasks: it does not feel right. In most Scrum teams I have seen, you have one, maybe two testers (or test-infected developers) on the team. They will do “all the testing” for your definition of “all the testing”. Once you create “test” tasks, you will find yourself with dedicated tasks that all the other team members assume a tester will pull from the taskboard.

Why is this a bad idea? First of all, such a task has an implicit dependency: not only on the tester(s) not being hit by a truck, but also on all the coding tasks being finished. Of course, in such a setting, my colleague’s question is very valid. If the testing task finds problems, probably all the work has to go back and start over. Enter the mini-waterfall, where we do analysis, design, coding, and testing within a Sprint, but nothing else in our culture and paradigms has changed.

From my point of view this shows a certain drawback. We have implicit dependencies, and on a sufficiently dysfunctional team this setting will create a silo mentality, probably setting up the whole coding team to fight against the one or two testers who create more work just before the sprint review. Ouch. I think this creates enough tension on a team that is just starting with Scrum that I would not recommend it as a starting point. However, if the team, as it gains experience over time, finds out that this is the right thing to do, I will be suspicious initially, but might get convinced over time.

On the other hand, Ilja Preuss pointed out that testing tasks might work very well for automated acceptance tests. I haven’t seen teams doing this, and I think it could lead to sub-optimization, but I also think it can work. I prefer to put “acceptance criteria are automated” on the team’s definition of done, and have this condition implicitly spread out over all the other tasks on the taskboard that we have to do as a team.
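To make that definition-of-done item a bit more tangible, here is a minimal sketch of what an automated acceptance criterion could look like in code. The story, the ShoppingCart class, and its whole API are invented for illustration; they are not taken from any team mentioned here.

```python
# Minimal sketch of an automated acceptance test, assuming a
# hypothetical story: "a customer can add an item to the cart".
# ShoppingCart and its API are invented for illustration only.

class ShoppingCart:
    def __init__(self):
        self._items = []

    def add(self, item, quantity=1):
        self._items.append((item, quantity))

    def count(self, item):
        return sum(q for i, q in self._items if i == item)


def test_customer_can_add_an_item_to_the_cart():
    # Acceptance criterion: an added item shows up in the cart.
    cart = ShoppingCart()
    cart.add("book")
    assert cart.count("book") == 1
```

A runner such as pytest would pick up the test_ function automatically; once every acceptance criterion of a story has such a test, the “acceptance criteria are automated” condition is met for that story.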

Dedicated Testing column

First of all, Scrum says nothing about how the team organizes its sprint backlog within a sprint. However, most teams use a taskboard as an information radiator, so that everyone on the team knows exactly what is happening, where they are, and how they are doing with regard to their sprint goal. There are several different ways to organize your taskboard. The most common understanding is that you need three columns: ToDo, Doing, and Done. I don’t agree with this.

When my colleague raised the question above, my immediate thought was, “how come they don’t have a separate Testing column?” I was thinking about a team I have consulted with for the past 9 months (or so). They have the following columns:

  • Story
  • ToDo
  • Coding
  • Review
  • Test
  • Done

They move the story card to Done once the product owner has reviewed the story in the running system during the sprint. They then have multiple tasks spread across the taskboard. The Review column may be skipped only if people worked in pairs on that particular task. This encourages pair programming, and I find it a very elegant way to support it. The remaining columns should come naturally.
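Just as a toy illustration of that review rule, you could state it as a tiny function like the one below. The representation and names are made up; the team presumably enforced the rule by convention on the physical board, not in code.

```python
# Toy sketch of the review rule above: a task may skip the Review
# column only if a pair worked on it. All names are illustrative.

def next_column_after_coding(workers):
    """Return the next taskboard column for a finished coding task."""
    # A pair (two or more people) may go straight to Test.
    return "Test" if len(workers) >= 2 else "Review"

print(next_column_after_coding(["Anna", "Ben"]))  # Test
print(next_column_after_coding(["Anna"]))         # Review
```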

In the Testing column, the only tester on the team collects tasks that have been implemented, up until the point where he feels comfortable testing these tasks together as a collection. I like to call this the MTSI, the minimal testable story increment. At times the tester tests parts of a story once he sees that there is a breakthrough he can test in a manual way. At other times, he might wait until all the other tasks have been completed. Of course, this approach has implications for how the team defines tasks, and this particular team I have worked with does a lot of things in a way that helps the programmers, the tester, the product owner, basically everyone involved. (They are hiring. If interested, drop me a line. :) )

Ilja pointed out that a separate Testing column probably implies manual tests. I don’t quite agree. I would also use testing tasks based on my Exploratory Testing charters if the context asks for it. I would also set up a low-tech testing dashboard as a second information radiator, but I would keep this idea to myself for some time in order not to overburden the change effort. Still, I think there are many valid reasons to indeed start with “put all the manual testing tasks in the Testing column”.

A mashup

As Cicero wrote in De re publica, I see advantages in the first approach, and I see advantages in the second approach. But if you ask me to take a position, I would prefer a mixture of both worlds.

If you start with a lot of testers coming from a manual-only background, you might want to start with a separate column. Over time, however, I would work hard with the team to get rid of this particular column and bring more and more testing-related tasks onto the taskboard. At the very least, I would make a possible bottleneck observable by using the Testing column as a sort of gateway column. If too much work piles up there, we certainly have something to talk about during the retrospective.
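For a team tracking its board electronically, making that bottleneck observable could be as simple as a work-in-progress limit on the Testing column. The sketch below is a hypothetical model: the limit of three is an arbitrary assumption, and a physical board or a tool with built-in WIP limits would do this differently.

```python
# Hypothetical sketch: a Testing column used as a gateway, with a
# WIP limit that makes a pile-up of work visible. The limit of 3 is
# an arbitrary assumption for illustration.

class TestingColumn:
    def __init__(self, wip_limit=3):
        self.wip_limit = wip_limit
        self.tasks = []

    def pull(self, task):
        if len(self.tasks) >= self.wip_limit:
            # Work is piling up: something to raise in the retrospective.
            raise RuntimeError(
                f"WIP limit of {self.wip_limit} reached in Testing"
            )
        self.tasks.append(task)


column = TestingColumn()
for task in ["task 1", "task 2", "task 3"]:
    column.pull(task)
# column.pull("task 4") would now raise and flag the bottleneck.
```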

On the other hand, when your testers come from an automation-heavy background, they might find it more comfortable to work with separate tasks. If I find testers working on their own silo tasks, I would work hard with the team to exchange more knowledge here, and to reduce the number of bugs that get into the system in the first place.

It might also be the case that a lot of testing-related tasks have to be carried out by programmers. Then I would start by tracking these as separate tasks, and sooner or later make them part of the definition of done, so that everyone takes responsibility for their own work.

Ideally, I would like to balance all three of these approaches against each other. This is not possible under all circumstances, but I think we can work on this as adults. If not, we might have a different problem to solve.

One thought on “Tracking testing on the Scrum taskboard”

  1. Hi,

    we have set up a Scrum team and just started our first sprint. The team is composed of a PO, an SM, three developers, and a tester. We’ve added a “QA” column to the task board, and I felt there could be a problem with it: because tasks in this column were handled only by the tester, it could become a bottleneck. To address this, we’ve added a WIP limit on the test column, and we encourage developers to test on their own when the limit is reached. Nevertheless, developers are usually bad at testing, so I wonder whether it is really a good idea. The future will tell!
