A proof and a challenge

Here is a mathematical proof I came across in eleventh grade, back in 1996.

Let x be the biggest real number below 1:

    x < 1
    => 1-x > 0
    => 1/(1-x) > 0
    => 2/(1-x) > 0
    => (1-x)/2 > 0
    => x < x + (1-x)/2

Since x is the biggest real number below 1, we may conclude

    (x <) 1 <= x + (1-x)/2
    (there is no real number between x and 1)
    => 1-x <= (1-x)/2
    => 1/(1-x) >= 2/(1-x)
    => 1 >= 2
    => 1 > 2 or 1 = 2

q.e.d.

Since 1 is surely not larger than 2, we may conclude that 1 must be equal to 2. That said, the whole of mathematics needs to be rebuilt. Since 1+1=2, and 1 and 2 are now interchangeable, it follows that 2+2=1 as well. Therefore we only need ones for all positive numbers. Everything is equal to one, just as I proved.

Now, you can either take my word for this proof and help me redefine the field of mathematics, or you can challenge the proof I just showed you. So, either you forget everything you have learned about arithmetic in the past 20 (or so) years, or you start to investigate the flaws of this proof.

It’s the same with other, less obvious things in our lives. You can either start to believe the evidence you are presented with, or start to challenge it, or to challenge it, or maybe to challenge it, or just to challenge it, or even stick with challenging it. Take your pick.

Inventory-taking

Recently I sat down and queried our bug database about my last four years as a software tester. Here are some statistics I found in it:

Bug counts

    Bug state     Count
    New              12
    Assigned          5
    Reopened          3
    Fixed           251
    Invalid          33
    Duplicate        27
    WorksForMe       14
    Reminder          1
    Won't Fix        16
    Later             2
    Sum             364

This makes 91 bugs per year, or 1.75 per week. 68.96% of the bugs I opened got fixed, 9.07% are invalid, 7.42% are duplicates, 4.4% will not be fixed, and 3.85% could not be reproduced, while about 5.5% are still either new, being worked on, or had to be reopened. (The remaining three bugs are parked as Reminder or Later.)
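
If you want to check my arithmetic, here is a minimal Python sketch that derives these figures from the raw counts. The state names and counts come from the table above, and the four-year window from the text; the variable names and layout are just my illustrative choices.

    # Minimal sketch: derive the rates above from the raw bug counts.
    # Counts are taken from the table; the four-year window from the text.
    counts = {
        "New": 12, "Assigned": 5, "Reopened": 3, "Fixed": 251,
        "Invalid": 33, "Duplicate": 27, "WorksForMe": 14,
        "Reminder": 1, "Won't Fix": 16, "Later": 2,
    }

    total = sum(counts.values())                    # 364
    years = 4
    print(f"per year: {total / years:.0f}")         # 91
    print(f"per week: {total / (years * 52):.2f}")  # 1.75
    for state, n in counts.items():
        print(f"{state:<10} {100 * n / total:5.2f}%")

Running it reproduces the 68.96% fixed rate and the rest of the percentages, which is exactly the point: these numbers are trivial to compute, and trivially shallow.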

What does this tell you about me as a tester? Am I a good tester? A bad one? A mediocre one? Were my bug reports always clear? Did they motivate the responsible developers to fix them? How were the bad reports distributed over the years?

Now, these are the more interesting questions to ask in order to make sense of whether I am a good tester or not. Mere bug counts or percentages do not reveal anything about this. So, rather than managing by the numbers, maybe manage by working with the individuals. The famous paper on software engineering metrics by Kaner and Bond has more on this.