Recently I sat down and asked our bug database about my last four years of being a software tester. Here are some statistics I found in it:
This makes 91 bugs per year, or 1.75 per week. 68.96% of the bugs I opened got fixed, 9.07% are invalid, 7.42% are duplicates, 4.4% will not be fixed, and 3.85% could not be reproduced, while nearly 5% of the bugs I opened are still either new, being worked on, or have been reopened.
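The arithmetic behind these figures can be sketched in a few lines. The per-status counts below are hypothetical — reconstructed so that they reproduce the stated percentages over an assumed total of 364 bugs (91 per year over four years) — and are not taken from the actual bug database:

```python
# Assumed totals: 91 bugs/year over 4 years.
total_bugs = 364
years = 4

per_year = total_bugs / years      # 91.0
per_week = per_year / 52           # 1.75

# Hypothetical per-status counts, for illustration only.
counts = {
    "fixed": 251,
    "invalid": 33,
    "duplicate": 27,
    "wontfix": 16,
    "worksforme": 14,
}

# Share of each status as a percentage of all bugs opened.
percentages = {status: round(100 * n / total_bugs, 2)
               for status, n in counts.items()}

print(per_year, per_week)
print(percentages)
```

Note that such a script can reproduce the percentages exactly, yet says nothing about whether any single report was clear or useful — which is precisely the point of the post.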
What does this tell you about me as a tester? Am I a good tester? A bad one? A mediocre one? Were my bug reports always clear? Did they motivate the responsible developer to fix them? How did the bad reports distribute over the years?
Now, these are the more interesting questions to ask in order to make any sense of whether I am a good tester or not. Mere bug counts or percentage values do not reveal anything about this. So, rather than managing by the numbers, maybe manage by working with the individuals. The famous paper on software engineering metrics by Kaner and Bond has more on this.
3 thoughts on “Inventory-taking”
As the saying goes, there are lies, damned lies, and statistics.
The thing is, numbers with a title do not a story make. Now if the stats read:
(From a dev perspective)
16 New bugs which we will get around to on Wednesday … some Wednesday
5 Assigned defects that we intend to fix cos the tester made a compelling case for us to do so or the PM said we had to (grumble grumble)
3 reopened bugs (oops, our bad)
251 Fixed. Go Dev, go dev, go dev …. oh and thanks to testing for finding them
33 invalid (your bad, boy)
That would be a story :)
The way you’ve presented the stats implies you haven’t taken a day off in 4 years. Maybe that shows that you’re a super-tester!
You see – I’ve just made an interpretation from your figures that you apparently didn’t intend to present.
That’s the danger with stats – if the story going along with them isn’t crystal-clear – usually more “klister-clear” as I joke in Swenglish (as clear as glue!) – then it’s going to be open to interpretation!
If that’s what the sender & receiver of the info wants, then fine. But I don’t think it is.
What is funny to me is that these tables and figures are so familiar to me. Somehow they seem to provide valuable information. I have been asked in the past, and will be again in the near future, to deliver such fancy tables with numbers.
Somehow we believe that tables with statuses and numbers deliver information. Somehow that information gains more context when presented in charts (actually, there are people who believe in this kind of “truth”).
I think you provided a good example and added additional information (the paper by Cem Kaner and Walter Bond) to help us make up our minds.
Assume that your table is familiar to me. It is so familiar that it could be my own. Actually, why not use your table and present that one? Perhaps I would get some complaints because there are unknown statuses in the table that management is not yet used to. I might have to explain what the “WorksForme” status means. Assuming I could provide them an acceptable explanation of that status, they might actually accept your table.
The same goes for using tables. If there are no questions to answer, certain tables/metrics are useless. I once attended a presentation by Markus Schumacher in which he said that “metrics should change behaviour”. To me there is a relationship here: there must be a question which needs an answer. If that question should be supported by metrics, then we have to provide them. The metric itself cannot be the answer, as it is missing interpretation. It can be supportive. The support is only right if it changes the behaviour of the recipient.
Would you think we should test our metrics? Would that be another challenge? Perhaps asking which test cases could be derived from “your table”? Would the number of test-case suggestions also be proof of the value of those metrics? :-)