Phil Kirkham suggested in the comments on my entry about Testing and Management mistakes to do a write-up on how Tim could have responded in a better fashion. Personally, I decided to show two fashions of this: the incongruent style and the congruent style. Today I will begin with the incongruent one, providing a more congruent view in a later blog entry. Just in case you haven’t read the source of inspiration from Michael Bolton yet, you can read it here.
Last week Jurgen Appelo put up a list of ten questions to ask your possibly new manager during a job interview. Since I found his list compelling, I was immediately tempted to answer the ten questions. Though I don’t consider myself a manager, here is my take on it.
This is a response to a recent blog entry from Michael Bolton. He challenges his readers to spot as many management mistakes as possible in a short dialog excerpt between a project manager and a tester. I will quote the dialog and walk through each of the problems I can spot. Though the challenge asks only about management, I will mention not only the problems related to management, but also those related to the tester.
Magnus the Project Manager: “Hey, Tim. Listen… I’m sorry to give you only two days notice, but we’ll be needing you to come in on Saturday again this week.”
Tim the Tester: “Really? Again?”
The manager does not give the reason for this action. He simply states that the tester is needed. This puts emotional pressure on the tester, who does not want to let the team down. I’m not sure whether the manager set this up intentionally or just by coincidence.
Tim, on the other hand, does not challenge the claim. The project manager simply says that he, specifically, is needed. But maybe another tester could do the job. What’s his mission? How long should he test? To whom should he report his results? Who is the client? Instead, he simply asks from the personal perspective, which might indicate an emotional reaction to the project manager’s claim. So, let’s look at how the manager responds to this.
Magnus: “Yes. The programmers upstairs sent me an email just now. They said that at the end of the day tomorrow, they’re going to give us another build to replace the one they gave us on Tuesday. They say they’ve fixed another six showstoppers, eight priority one bugs, and five severity twos since then, and they say that there’ll be another seven fixes by tomorrow. That’s pretty encouraging—27 fixes in three days. That’s nine per day, you know. They haven’t had that kind of fix rate for weeks now. All three of them must have been working pretty hard.”
The build is a replacement for a broken build. The programmers seem to have produced some lousy work the last time. Now, that’s interesting information. Despite this valuable information, the manager relies on the facts as counted by the numbers. There seems to be some corporate metrics program that leads this manager to look more closely at the number of bugs fixed than at how many further bugs could have been introduced by fixing those. Again, this information is available in any bug database, but the manager simply ignores it. Last, but not least, his interpretation in the last sentence indicates that “more bugs fixed” means “worked harder”. This need not be the case. Another point that Jerry Weinberg made me aware of is that nothing is stated about how many incidents correlate to the mentioned bugs. Ten bugs could have been fixed by changing a single line of code. Similarly, there could be a single bug which needs fixing in several components. The numbers alone lie here. A final point: the manager would know whether his programmers had worked hard if he had paid attention to what they were doing, rather than looking only at the numbers.
Tim: “They must have. Have they done any testing on those fixes themselves?”
Tim clearly should know better here than to give in to the manager. The developers need not have worked very hard, as pointed out before. Tim would know this if he had paid attention to the history of the project. By affirming that the only possible explanation is that the developers were working hard, he proves his manager correct – in the manager’s eyes. If you believe something hard enough, your mind makes it a fact. This is confirmation bias at work.
Magnus: “Of course not. Well, at least, I don’t know. The build process is really unstable. It’s crashing all the time. Between that and all the bugs they’ve had to fix, I don’t imagine they have time for testing. Besides, that’s what we pay you for. You’re quality assurance, aren’t you? It’s your responsibility to make sure that they deliver a quality product.”
Of course the developers did not test; how could they deliver that many fixes otherwise? There are several pointers that this manager ignores relevant information. He doesn’t know whether they tested their code, and the build process is unstable, but that doesn’t give him a clue, either. In combination with the Assurance Fallacy of Testing (“You’re quality assurance, … It’s your responsibility…”), the pressure is to fix many bugs rather than to fix them properly. Of course, the build process crashing all the time could indicate a problem in the quality of the code these programmers produce.
Tim: “Well, I can test the product, but I don’t know how to assure the quality of their code.”
When someone is arguing irrationally, don’t try to argue with logic. “I know that you have many duties as a project manager, but doesn’t the broken build process point out that there might be something wrong?” could be a response to bring the manager’s attention back to quality rather than delivery. The hidden message Tim wants to send here just doesn’t work in this context.
Magnus: “Of course you do. You’re the expert on this stuff, aren’t you?”
Without knowing this particular project manager, I sense that he is basically just trying to sell the testing job to this poor tester. To any reasonable tester it’s obvious that this manager does not know anything about testing at this point. Instead, by giving kudos (“expert”), he tries to make the tester feel comfortable. This is “washing-machine-selling management”: in this style, management acts as if they were selling you a washing machine. Here, the selling consists of convincing the tester that he is the right person to do overtime on the weekend. Anyone who has thought about testing even a tiny bit should come to the conclusion that testers don’t assure any code quality. But as pointed out, the manager simply wants to get this tester to work over the weekend, even at the cost of his own credibility.
Tim: “Maybe we could arrange to have some of the testing group go upstairs to work more closely with the programmers. You know, set up test environments, generate data, set up some automated scripts—smoke tests to check that the installation…”
This is a brilliant idea. Let testers and programmers work together to fix the team’s problem, and help programmers get started with unit testing and the like. Too bad his suggestion is interrupted here by the manager, thereby undermining the tester’s contribution to managing the situation. Of course, in the manager’s eyes, the tester shouldn’t make suggestions on how to manage the project.
Magnus: “We can’t do that. You have high-level testing to do, and they have to get their fixes done. I don’t want you to bother them; it’s better to leave them alone. You can test the new build down here on Saturday.”
At this point, the manager fires back by telling the tester how to test. Personally, I think this is a response to the tester’s attempt to manage the project. Of course, the fallacy that testers interrupt and interfere with development is inherent here. In fact, testers help developers get going. If the priority is to fix many bugs rather than to fix them properly, then it’s more important just to get some software out. If the software does not need to work, it can meet any other requirement (the Zeroth Law of Software, taken from Weinberg’s Quality Software Management Volume 2).
Tim: (pauses) “I’m not sure I’m available on Sa…”
Magnus: “Why not? Listen, with only two weeks to go, the entire project depends on you getting the testing finished. You know as well as I do that every code drop we’ve got from them so far has had lots of problems. I mean, you’re the one who found them, aren’t you? So we’re going to need a full regression suite done on every build from now until the 13th. That’s only two weeks. There’s no time to waste. And we don’t want a high defect escape ratio like we had on the last project, so I want you to make sure that you run all the test cases and make sure that each one is passing before we ship.”
Did you notice that this manager does not even let the tester finish? Again? This indicates that the manager has no interest at all in hearing his tester’s opinion. It’s more important to have him working on Saturday. Opinions do not count. In addition, he puts more pressure on the tester here. The schedule is of course pressuring this manager, too, but it’s his job to manage for a quality release.
In addition, his claims in this response are not congruent with his earlier message. He demands quality from the tester, but not from those who put the quality into the code itself. This is ridiculous, and I really would not want to be in Tim’s situation at this point. The manager again tells the highly educated tester what he has to do. Mary Poppendieck explained to me that this is a real problem.
Once again the manager is ignoring the facts. Every “code drop” has had problems in the past. So, where are the code reviews? Where are the unit tests and walkthroughs? Where are the design reviews? Given the information the manager seems to have, he could really do better.
Tim: “Actually, that’s something I’ve been meaning to bring up. I’ve been a little concerned that the test cases aren’t covering some important things that might represent risk to the project.”
This response again is good, but Tim could do better. Of course, telling the manager outright that the problem might lie in the way he manages the project with the information at hand would be ridiculous. The Candidate Product Rule says, “Actually, it ain’t nothin’ ’til it’s reviewed” (Weinberg, QSM Vol. 2, page 290). This manager seems to manage by the rule that it’s nothing until it’s tested, but that review can easily be skipped. The claim Tim makes here is still valid, but he doesn’t respond to the things Magnus just raised. He could bring his point out more clearly by doing so.
Magnus: “That might be true, but like I said, we don’t have time. We’re already way over the time we estimated for the test phase. If we stop now to write a bunch of new test scripts, we’ll be even more behind schedule. We’re just going to have to go with the ones we’ve got.”
Ah, the common time-creation myth. Time is not created; you have to take your time. Especially for quality products there is no free lunch. You have to plan accordingly, incorporating the time it takes. The fact that the project is already over its estimate may well be the manager’s own doing. And the manager is falling for the test-phase fallacy: testing is not a phase, though bug fixing might be. Last, not adapting to the project as it unfolds by creating new test cases may well produce the problems of tomorrow. This vicious cycle that Magnus gets into here is very common.
Tim: “I was thinking that maybe we should set aside a few sessions where we didn’t follow the scripts to the letter, so we can look for unexpected problems.”
Magnus: “Are you kidding? Without scripts, how are we going to maintain requirements traceability? Plus, we decided at the beginning of the project that the test cases we’ve got would be our acceptance test suite, and if we add new ones now, the programmers will just get upset. I’ve told them to do that Agile stuff, and that means they should be self-organizing. It would be unfair to them if we sprang new test cases on them, and if we find new problems, they won’t have time to fix them. (pause) You’re on about that exploratory stuff again, aren’t you? Well, that’s a luxury that we can’t afford right now.”
Tim’s suggestion to do some exploratory testing to fill the gaps in the uncovered areas is great. Magnus’ response to it is fatal to the project. Again the project manager does not adapt to the situation at hand after the project has started to unfold. By forgoing coverage of those risky areas which are not covered by the initial test cases, the manager sets himself up for tomorrow’s problems. Oh, and of course it’s better if the customer finds the bugs later than if the programmers fix them now, before the loss of reputation. Last, but not least, exploration isn’t a luxury, and neither is testing itself. Both are a necessity for quality software. Skipping either or both not only undermines the testers’ efforts, but also compromises the developers, who observe very well what managers do and don’t do. Finally, the manager has misunderstood Agile as developing software without the weight of documentation, but has not understood that Agile development needs a team. By separating the testers from the developers, the manager undermines every benefit he could get from really doing Agile development.
Tim: (pauses) “I’m not sure I’m available on Sa…”
Magnus: “You keep saying that. You’ve said that every week for the last eight weeks, and yet you’ve still managed to come in. It’s not like this should be a surprise. The CFO said we had to ship by the end of the quarter, Sales wanted all these features for the fall, Andy wanted that API put in for that thing he’s working on, and Support wanted everything fixed from the last version—now that one was a disaster; bad luck, mostly. Anyway. You’ve known right from the beginning that the schedule was really tight; that’s what we’ve been saying since day one. Everybody agreed that failure wasn’t an option, so we’d need maximum commitment from everyone all the way. Listen, Tim, you’re basically a good guy, but quite frankly, I’m a little dismayed by your negative attitude. That’s exactly the sort of stuff that brings everybody down. This is supposed to be a can-do organization.”
If it’s natural for Tim to come in every weekend, he will be asked again and again. He shows his struggle during this conversation, but the manager keeps applying his influence to get exactly what he wants. Again, the manager ignores information which is readily available to him beyond what he gets from the testers. The pointer to “bad luck” reminds me of Weinberg, who suggested replacing the word “luck” with “management” whenever a manager refers to bad luck. We might as well pray to the software gods if a whole company can be built upon management by luck. Of course, the manager’s claims about commitment and a can-do organization are just institutionalized phrases to make his argument and get Tim motivated. The problem lies in the decrease of motivation after this conversation. Tim’s trust in his management has surely decreased, and a few more of these instances may make Tim leave the company in search of a better job opportunity.
Tim: “Okay. I’ll come in.”
Finally, Tim gives up. Never do this, even though it might mean fighting for ages or losing the job. That’s OK, since you didn’t like the job anyway, now did you?
That’s my more elaborate response to Bolton’s challenge. Now, which parts did I miss?
Today, I started off a series on documentation fallacies. Gerald M. Weinberg inspired this series. Since I didn’t have the original inspirational quote handy at the time, I have looked it up now. As it is the initial one, I call it Documentation Fallacy #0. It’s a citation from the reminders in Perfect Software… and Other Illusions About Testing.
The Documentation Fallacy #0:
Believing that the mere existence of documents has some value.
Just because there is a document means nothing. The document might simply lie around and gather dust. When no one reads your document, you have basically wasted all the time that you spent preparing to write it, writing it, reviewing it, and publishing it. Period. Well, maybe you learned something as a side effect, but you could have learned that without writing a document that no one reads in the first place. OK, maybe you could have learned something by writing your thoughts down on the topic, sure, but still, the writing isn’t worth the paper that it is going to fill. Save the rainforest, remember?
So, just writing a document means nothing. It could be of no value because it has low quality, it could be of no value because it describes the life of Alfred E. Neumann, or it could be of no value because it describes how the 8086 CPU was invented. If your project is not about any of these, you will simply not care about the content. More dramatically, your document could distract the people on your project and mislead them. In the examples above I exaggerated to make a point, but what if the document were plainly wrong, and people started to work from it? Maybe you will end up with a project way behind schedule when you find out about the problem right after delivery of the system. Doesn’t sound good, does it?
The Documentation Fallacy #1:
Thinking that more documentation means more thoroughness.
More documentation doesn’t mean anything by itself. Personally, I find great value in those documents that are as crisp as possible and point me to where I can find the details if I need them. Usually I start to dig into a topic by getting a rough overview. After that I dig deeper and search for more details. This is the reason to write requirements in a rough form before the work on the implementation begins. This concept can be found in use cases. Goals build the high-level overview. There are different goal levels, and for more complicated business flows you write down an overview use case description alongside finer-grained use cases on the subsystem or function level, as you see fit. Similarly, when working on a user story, you start with a high-level overview and then dig deeper.
So, starting with a crisp overview is great. But when digging deeper you don’t need to document everything. The trivial cases are mostly irrelevant to document, because everyone already knows what shall be done in the login use case, for example. Well, maybe you have a more complicated login use case; then document it. But if not, refuse to document the very obvious. Simplicity – the art of maximizing the amount of work not done – is the key here. This does not mean that you shouldn’t document anything at all, of course. That would be stupid.
The Documentation Fallacy #2:
Thinking that since it’s documented, it gets read.
Just because something is written down somewhere does not mean that anyone will read it. Maybe you’re lucky enough that someone reads it, but you can’t rely on it. Similarly, you cannot rely on the people who actually need to read the document having read it. Most of the time, documents simply lie around with no one reading them, gathering dust – even virtually, on the hard drive of your computer. The waterfall model introduces the biggest problem here, since its documents don’t necessarily get read. I remember a case where we ran into a problem during user acceptance testing resulting from customer stakeholders not having read (properly?) the documents that we had sent them. Obviously this was the wrong time to find out that the software didn’t realize the functionality properly.
The Documentation Fallacy #3:
Thinking that since it gets read, it gets understood properly.
The Satir Interaction Model taught me that not everything that gets read is understood in the same way by all readers of the document. The model splits communication into four phases: intake, meaning, relevance, and response. During the intake, each person takes in only some parts of the message she just received. Therefore the message gets distorted during reception already. The meaning phase then interprets the message based on the personal rule model and previous knowledge of the receiver. This might distort the message even further. Overall, this may result in necessary details being left out, or unnecessary details being added to the message.
That said, it’s obvious that everyone may understand something different when reading a particular document. As the message and its meaning are distorted by what the receiver takes in (partially) and what she adds based on previous experience, the understanding may well end up being something completely different. So, merely reading a document does not mean that the author’s intentions were understood.
The Documentation Fallacy #4:
Thinking that since it gets read, it actually gets used downstream.
Thanks to Paul Boos, who added this to my initial list. The two parts of the Satir Interaction Model left out of the discussion so far are relevance and response. In the relevance step I assign to the distorted message how important I think that interpreted message is to me. If it is irrelevant, I might end up ignoring the message altogether, thinking “fine, so what?”, or “don’t bother me.” I might even assign it so little relevance that I ignore the message without responding to it at all. In the response step I decide how to react to my interpreted message, to which I assigned some relevance. I can say something completely different, even out of context, like “nice weather outside, isn’t it?” This means that the mere existence of a document does not mean that I will follow its contents when reading it. What matters is which relevance I assign to the message and how I decide to respond to it.
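As a playful aside for the programmers among my readers, the four phases of the Satir Interaction Model can be sketched as a small message pipeline. This is only a toy illustration of the idea, not part of Satir’s or Weinberg’s actual model; all function names and the crude “distortion” rules are my own assumptions:

```python
# Toy sketch of the Satir Interaction Model as a message pipeline.
# The distortion rules below are deliberately crude illustrations.

def intake(message, attention=0.7):
    """Each receiver takes in only part of the message."""
    words = message.split()
    kept = words[: max(1, int(len(words) * attention))]
    return " ".join(kept)

def meaning(fragment, prior_knowledge):
    """Interpretation is colored by the receiver's prior knowledge."""
    return f"{fragment} (read against: {prior_knowledge})"

def significance(interpretation, cares_about):
    """The receiver decides how much the interpreted message matters."""
    return any(topic in interpretation for topic in cares_about)

def respond(interpretation, relevant):
    """The response depends on the assigned relevance, not the original text."""
    return f"Let's talk about: {interpretation}" if relevant else "Fine, so what?"

document = "the release is blocked by an unstable build process"
fragment = intake(document)          # only part of the message arrives
interp = meaning(fragment, "last week's build failures")
relevant = significance(interp, ["build"])
print(respond(interp, relevant))
```

Note how the final response is driven by the distorted interpretation and the assigned relevance, not by the document’s original text – which is exactly the point of the fallacies above.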
The Documentation Fallacy #5:
Thinking that since it is understood properly, it is used properly.
Thanks to Jeroen Rosink for this addition. As derived from the Satir Interaction Model again, the response I take to my understanding of the document may vary a great deal. This means that I might understand the document, but this does not necessarily mean that I will apply its message accordingly. In fact, most of the software I have come across was used unintentionally, in ways that the developers initially did not intend. The Windows scripting capabilities are one example of this, but there are many, many other examples out there. I’m sure you can name at least five on your own.
Please let me know of any additions I could make to this list. Hopefully I have sparked some thoughts by now, so share your stories of misused or unread documentation with me.
Unconsciously I used the word beamer here to mean video projector (damn Germans). This might be an occasion to show up on the Nothing for ungood blog.
As I sat in a meeting today, I noticed what the problem is with meetings that are facilitated and minuted live on the beamer at the same time. As I sat there, I was rather bored and couldn’t follow the content. So, what is the problem?
Basically, you have two conscious processing modes in your brain. One is mainly done in the left hemisphere and is called linear processing. In linear processing mode you work through language, writing, and similarities. This processing mode is mostly used by technical workers. The other is the rich processing mode. This mode is asynchronous, as it searches through all the things you learned long ago while doing its job. Abstraction and painting, for example, are done in the right hemisphere in this way.
Now, the problem is that you can use only one or the other at a time. When linear thinking is in progress, the right part of your brain can’t communicate over the shared communication channel. Andy Hunt introduces this model as two CPUs with a shared bus in Pragmatic Thinking and Learning.
Now, let’s get back to our meeting. Most of the meetings I’m invited to are meetings to solve a problem. Problem solving uses the right part of your brain the most: you need to search past solutions and build upon them to solve the new problem at hand. When the beamer is turned on and the minutes are typed directly into a document, your left brain constantly interrupts the problem-solving communication of your right brain while it reads the letters in the document. With every typo your problem-solving abilities get distracted and derailed. Seriously, you’re hurting your progress more than you might be aware of!
Instead of constantly writing down for everyone the actions you considered, think about my proposal:
TURN THAT BEAMER OFF! NOW!
Sure, this demands some more thought about your meeting. It needs more preparation. But maybe exactly these unprepared meetings are the root cause of the problem in the first place. So, to reach highly productive meetings, simply leave the beamer off, or ask to work through the meeting without the beamer in case you’re not the meeting facilitator. Your meeting results and your colleagues might enjoy it. Oh, and just in case your participants are not prepared, postpone the meeting. You’ll be better off anyway.
Recently I finally finished the fourth volume of Jerry Weinberg‘s Quality Software Management series. As a wrap-up I decided to write a summary of what I learned and why you should read it yourself, too, if you haven’t already.
In summary, the first book introduces the cultural patterns that Jerry has found during his work with software companies. These cultural patterns include the Oblivious (Pattern 0), the Variable (Pattern 1), the Routine (Pattern 2), the Steering (Pattern 3), the Anticipating (Pattern 4) and finally the Congruent (Pattern 5) culture. In the first three volumes he digs deeper into how to create a Steering culture. In the fourth volume he goes into the details of reaching an Anticipating culture.
Jerry provides a compelling view of the act of creating software, testing it, and delivering it. Interestingly, his view is still right up to date, although he wrote the four books more than a decade ago. Next, I’m going to go into detail on each of the key lessons I took with me while reading through them.
Quality Software Management Volume 1 – Systems Thinking
In the first volume Jerry provides the cultural pattern model and relates it to the CMM levels and other approaches. Interestingly, his view includes the culture around the people actually producing the software in the first place, while other models focus on anything but the people. First and foremost, Jerry defines quality as “value to some person”. Based on this definition, the question we have to ask in software development – which I define as programming, testing, and delivering – is whose values count most to us.
Additionally, Jerry introduces the reader to systems thinking. That is a model for thinking through difficult and maybe complex situations in software projects. He shows many diagrams applying systems thinking, thereby creating deep insights into each of the cultural models, as well as into the dynamics around the creation and maintenance of software.
Using systems thinking, Jerry introduces the non-linearity of aspects of software development. He revisits Brooks’s Law and generalizes it to hold not only for late projects, but for any project: when adding people, you’re making it more complicated, and maybe even later.
Jerry provides a compelling view of software error, distinguishes between faults and failures, and thereby provides a model for measurement, which is the topic of the second volume.
Quality Software Management Volume 2 – First-order measurement
In the second volume of the series, Jerry continues to dig deeper into the cultural model of software development organizations. Jerry presents a thorough discussion of the Satir Interaction Model. The Satir Interaction Model splits communication into four major steps: intake, that is, how I take in information; meaning, which is derived by making interpretations of the received message; significance, that is, whether I start to bother or not; and finally the response to the initial message. All these steps happen in a fraction of a second in human communication.
The Satir Interaction Model is useful in situations where you start to make meaning too soon, based on fractions of the data that you got. Often this leads to misinterpretations of the intentions of your communication counterpart, and therefore not only to a poor dialogue, but also to a big misunderstanding. That’s why Jerry raises the Rule of Three:
If I can’t think of at least three different interpretations of what I received, I haven’t thought enough about what it might mean.
In parallel to the discussion of the Satir Interaction Model, Weinberg continues to show the relevance of first-order measurement in management. Using systems thinking as introduced in the first volume of this series, he explains management actions and their results. He continues to dig into metrics that are useful for managing. Finally, he discusses zeroth-order measurements that every manager should be aware of. These measurements tell you which things to keep an eye on in order to manage your software project.
How to act upon these data is the topic of the third volume in this series.
Quality Software Management Volume 3 – Congruent action
In Congruent action Jerry introduces congruent communication. For any effective communication one has to respect the self position, the other position, and the context of the communication. When leaving one or more of these out, you’re going to find yourself in an incongruent communication style. Leaving out the self position leads to placating behavior, missing the other position leads to blaming, leaving self and other out is the super-reasonable response, and finally leaving all of them out is the irrelevant coping style.
Additionally, Jerry discusses the Myers-Briggs Type Indicator system. He introduces the four dimensions of that model. The first of the four letters says something about how I refresh my energy: introverts (I) seek self-reflection, while extroverts (E) seek interaction with people. The second letter describes how I take in information: either via facts, the sensing type (S), or by grasping abstract concepts via intuition (N). The third letter says something about how I make meaning: by pure logic in the thinking preference (T), or by grasping the feelings of the humans around me in the feeling preference (F). The fourth and last letter states how I prefer to take action: the judging (J) style prefers to settle decisions, while the perceiving (P) style prefers to leave options open. So, as an INTJ, I prefer to recharge by being alone, intuitively grasp abstract concepts, make meaning of them through logical thinking, and prefer to have things settled.
So, a congruent manager should consider their own preferences and recognize the preferences of others using the Myers-Briggs Type Indicator model. Whenever I become aware that my message was not received correctly by my communication partner, I can choose to reflect and maybe present the information in a different format for my audience.
In addition, Weinberg concludes this third volume of the series by showing how to reach a Steering culture built upon systems thinking, first-order measurement, and congruent action, while leaving the transition to an Anticipating culture for the fourth and last volume.
Quality Software Management Volume 4 – Anticipating change
In the fourth and last volume Jerry introduces the Satir Change Model. Since an Anticipating culture relies heavily on change artists, this model comes in handy when forming such an organizational culture. Weinberg explains in great detail how people react to foreign elements in the organization, and that too many threatening external influences may result in people hiding in their basements. In addition, he walks the reader through the change model and explains how to get from an old status quo to a new status quo and actually make and foster the necessary change.
Weinberg continues with how an anticipating organization works through the overall software development process. Starting from meta-planning and tactical change planning, over to planning as a software engineer, this volume shows how to make change stick in the organization. In addition he writes about processes and process improvements. He lists process models commonly in practice at the time he wrote the book (1997), and discusses why it is important for a change artist to know several of them. Weinberg also describes the differences between a process vision, a process model, and the process itself.
Finally, he gives a compelling view on things that are still relevant nowadays. Taking a closer look at how to terminate projects, and how to know when you should terminate or re-plan them, he gives a thorough overview of do's and don'ts in software development for the project manager. He argues that requirements documents and design documents should be placed in a version control system alongside the code. Considering that he wrote this book more than ten years ago, it strikes me that I still encounter multiple teams which are not using any version control mechanism at all. Finally, Weinberg takes a look at tools, and how and when to introduce them.
The biggest gift in this volume is the list of do's and don'ts in software projects, alongside the eleven commandments of Technology Transfer. His comparison of the waterfall model to other software development models may seem a bit outdated, but most of it is still relevant. Weinberg even touches on the spiral model in this discussion, so portions of it also apply to modern Agile methodologies. Of course, the key lesson to take away here is the principle of knowing many models, and knowing when each applies and should be used.
Overall, Jerry provides many techniques and models which come in handy in understanding the dynamics at play during a software project. Systems thinking helps you analyze the complex social systems in your project. The cultural model provides a useful view on how quality is seen in a particular organization, and which steps you might take towards improvement. The Satir Interaction Model is useful in critical project situations, where your own measurement system is likely to collapse under the pressure of the situation.
Last, but not least, I’m glad that I got my copy of QSM Vol. 1 signed by Elisabeth Hendrickson. It was a coincidence last October, but she told me she felt honored to sign it for me.
Some while ago I started to write a tester’s novella, heavily inspired by Elisabeth Hendrickson‘s article in the January 2010 issue of the Software Test and Performance magazine. I wanted to try my hand at a tale of a tester after having read her article, and got in touch with Elisabeth. We exchanged some thoughts, some words, and voilà, there it was.
If only it had been that easy. Basically, what I wanted to tell is the story of a new tester in a larger corporation, based on my personal experience four years ago, when I realized there was little previous knowledge that I could take with me into the job. Between the job interview and starting as a tester, my father, who had suffered from cancer for a little more than a decade, died. One day after the job interview I brought him to hospital, and in the midst of these family affairs I got the call offering me the position. Despite this hard set-back, I started to dig into the field and learn: first by getting experience from colleagues, later by reading the classics and reflecting constantly on the practices I followed.
After having read a blog entry from Anne-Marie Charrett and one from Rob Lambert earlier, I decided to tell a tale about a tester getting introduced to our work. Alongside it I want to spread the word on what we’re actually doing, though there may be just a few outside the testing field who will actually care. Therefore, I realized I needed to write it as an authentic yet fictive story, just like “The Craftsman” from Robert C. Martin.
So, I realized that I might split my work up into pieces; from the original article I have come up with three so far, and there is room for further episodes in the same manner. I’m still working on the story-line, so I would be glad to get some feedback from readers on future episodes, which I might incorporate. The Software Testing Club put the first episode of the Deliberate Tester up on their blog. You can read it here; it’s called Session based exploration.
In yesterday’s European Weekend Testing session a discussion came up on whether or not to follow the given mission. The mission we gave was to generate test scenarios for a particular application. During the debriefing, just one group of testers had fulfilled this mission to the letter, generating a list of scenarios for testing later; the remaining two had deviated from the mission and tested the product, thereby providing meaningful feedback on the usefulness of the product itself. Jeroen Rosink already put up a blog entry on the session. He mentions that he defined his own mission, and that it’s ok to do so during his spare time:
“As mentioned I made my own mission also. I believe I am allowed to do it because it is my free-time and I still kept to the original mission: define scenarios. For me the questions mentioned in the discussion were a bit like the scenarios.”
Of course, Jeroen is right, and he provided very valuable feedback in his bug reports. But what would I do at work when faced with such a situation? Should I simply test the already available application? Or should I do as I was asked? Well, it depends, of course: heavily on the context, on the application, on the project manager, on the developers, on your particular skill level, maybe even on the weather conditions (nah, not really). This blog entry discusses some of the aspects I did not want to go into during the chat yesterday.
One week earlier, Michael Bolton attended our session. He explained that his approach started with building a model of the application under test. He then started to exercise the product based on that model, thereby refining his own mental model of the application.
Jeroen had picked this approach: he also built a mental model of the application and went through the application with that model in order to refine it. Ajay Balamurugadas, on the other hand, had built his model and translated it completely into test scenarios.
To be clear, both approaches are reasonable; indeed, knowing when to use which is essential. The software engineering analogy teaches us that we have to make decisions based on trade-offs. The trade-off at play here is model building (thinking) as opposed to model refinement (doing). The more time I spend on thinking through the application in theory, the less time I have during a one-hour session of weekend testing to actually test the application. On the other hand, the more time I spend on testing (and the more bugs I find by doing so), the less time I can spare to refine the model I initially made. Knowing the right trade-off between the two is context-dependent. I summarized this trade-off in the following graph.
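At its core the trade-off is a fixed time budget. A minimal sketch in Python – the one-hour session length comes from the text, while the example splits are invented for illustration:

```python
SESSION_MINUTES = 60  # a typical weekend testing session

def time_left_for_testing(minutes_modeling: int) -> int:
    """Within a fixed session, every minute spent building the model
    up front is a minute no longer available for hands-on testing."""
    return SESSION_MINUTES - minutes_modeling

# A heavy up-front model leaves little time to exercise the product...
print(time_left_for_testing(45))  # 15
# ...while diving straight in leaves the model coarse,
# but maximizes hands-on testing time.
print(time_left_for_testing(5))   # 55
```

Where the best split lies along that line is exactly the context-dependent judgment call described above.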
In software testing there are more of these trade-offs. You can find most of them on page four of the Exploratory Testing Dynamics, where exploratory testing polarities are listed. Thanks to Michael Bolton for pointing this out to me.
Basically, the main distinction between the two missions followed is that Ajay used all the information available to his imagination to build the model for test scenarios, while Jeroen questioned the available product to provide him some feedback on his course. While we may not always have an application available to help us make informed decisions about our testing activities, the situation in the weekend testing session was constructed to have a product in place which could be questioned.
Indeed, for software testing it is vital to make informed decisions. The design documents, the requirements documents and the user documentation rarely satisfy our call for knowledge about the product. Interacting with the product can therefore reveal vital information about the product and its shape, gathering information in order to refine your model of the application at hand. Of course, for very simple applications, or for programs in areas where you are an expert, this may be unnecessary. Again, the contextual information around the project provides the necessary bits of information about which path to follow.
So, professionally, Ajay and Jeroen both did a great job of testing. The key difference I would make at work is that I would inform my customer of the deviation from the mission. There might be legal issues – a call to follow a certain process for power plant testing, for example – that demand following the approach to the letter. Negotiating the mission with the customer, as well as proposing a different mission when your professional sense calls you to do so, is essential for an outstanding software tester. Deviating from the original mission is fine with me, as long as you can deal with the Zeroth Law of Professionalism:
You should take responsibility for the outcome of every decision you make.
Since we’re more often than not the professionals when it comes to testing software, we need to inform our customer and our client about deviations from the original mission. We have the responsibility to explain our decisions and to make a clear statement that it’s unprofessional to deliver untested software, for example. Of course, they might overrule your decision, but by then they have taken over the responsibility for the outcome themselves. And just because you lost one fight does not mean that you should give up raising your point and lose the whole battle.
Last, but not least, I hope that the other participants – Vijay, Gunjan Sethi, Shruti – do not feel offended because I mentioned just Jeroen and Ajay here. They also did a great job of testing the product, of course.
During a workshop at 8thLight, Justin Martin collected assets of a software craftsperson. He published these on his apprenticeship blog, and I found the list compelling enough to extend. As I went through the list, I noticed that I had included most of it in my article Software Testing Craft in the first issue of the Agile Record magazine.
Always brightening everyone’s mood with your love of what you do.
Deliberate practice and the enthusiasm to learn something new anywhere are at the heart of my craft. I seek learning opportunities for testing, such as testing challenges, Weekend Testing or Testing Dojos, as well as Coding Dojos, Coding Katas, and Refactoring challenges. Skimming code to seek improvements has become as vital to me as testing a new application for flaws that may bug me later. The life of a software professional is a daily learning opportunity, if you’re open to it.
Don’t be afraid to ask for help
Especially if you feel rushed, or have bitten off more than you can chew.
Everybody tries to be helpful, all the time. Indeed, Jerry Weinberg pointed out in Becoming a Technical Leader that being helpful is key to becoming a motivational leader. That said, there are two sides to this coin: being helpful and asking for help. In a helpful environment it’s easier to ask for help, but don’t hold back in non-helpful environments, either. Don’t be afraid of saying “I don’t know”, since it’s vital to know what you don’t know. If you know what you don’t know, you also know when to ask for help.
Ask a lot of questions
Even if you are already a Craftsman, if you are surrounded by other Craftsmen, there will always be new things to learn.
There is a direct connection to the previous point: asking for help is a special case of asking lots of questions. By asking questions, you start to reflect on what you have perceived. Over time, when you get helpful answers to your questions, you are able to learn from them and grow. This is called expertise, or experience. By constantly questioning not only the technical facets but also the process aspects of your work, you develop yourself as well as your mind.
Discipline Discipline Discipline
The mark of a Craftsman is to have an unfailing discipline to do the right thing. Whether that is constantly learning new things, or never forgetting what you know, especially when it is hardest to practice.
Discipline is a key to successful learning. Giving in to feelings of pressure or mistrust is the pathway down to suffering; you need discipline to stay on your course and pursue it, because leaving the safe ground means going astray. Discipline itself can be separated into three basic components: going slow, doing it right, and being methodical.
Often it is better to take your time, making sure you do every step correctly, rather than rushing for a temporary spike in productivity.
Going faster by going slow sounds weird, but it’s the paradoxical key to success. Always remember that typing is not the bottleneck; otherwise we could simply buy you a faster keyboard. Giving your mind the right amount of slack saves you from paralysis. Rushing something to success is merely the solution of a novice: a bloat of erroneous features that you need to fix later will not help in the long run, while taking the time for steady progress now will.
Do it Right
Never stop practicing the things you know that work. Don’t stop testing first. Don’t stop refactoring.
How come we always have the time to do it over, but never the time to do it right? Going slow is the first step towards making it right. If the software does not work, you may meet every other requirement, but it’s the “getting it to work” that’s the hardest part. Similarly, if your suite of automated checks has a lot of false negatives – that is, checks that pass even though they shouldn’t – then you will never know whether you can successfully deliver your software or not. Make the tests and the checks right, or don’t make them in the first place. Anything worth doing is worth doing right.
Be thorough. Be certain you are producing the best work that you are capable of through a steady and unflinching practice of virtues.
Taking the time and making it right are the first steps towards being methodical. If you give in to time pressure, you are less likely to work methodically and might fall back into cowboy-coding. This does not help. Instead, remember what you learned – the TDD cycle, the ATDD cycle – and don’t try to run before you can walk. Through this you will make the steady progress that helps you and your customers.
A Craftsman knows his own productivity, and can gauge how much he can get done in a certain time span. Making lists, getting metrics, and measuring your productivity will help to accomplish this.
The heart of Agile is to make progress visible: measure anything worth measuring in order to prepare an informed decision later. Craftsmanship takes this a step further. In order to know how much effort a particular project may take, you need data from your past. Creating lists based on past experiences may help you with this. Over time you will be able to see patterns, and might no longer find the lists useful. By then you may have reached a new level of mastery – the Ri level in the ShuHaRi model of skill acquisition, or the Expert stage of the Dreyfus model.
Understanding the Real Business Intent
It is important to remember you aren’t just writing code to fulfill some requirement on a note card, but you are actually creating a product that a business intends to use. Try to understand what that note card means to them as you transform the requirement into a feature.
In order to do the right thing, you have to know what the right thing is. Understanding the business behind your work is essential for any knowledge worker – developers, testers, business analysts and technical writers alike. Without the vision behind your work, you’re doing useless work. Remember the Product Principle from Weinberg’s Quality Software Management series: anything not worth doing is not worth doing right. Since you’re doing everything right, avoid doing anything not worth doing in the first place. Of course, in order to make this decision, you need to know what your customer’s business is asking for.
Recognize your failures
Recognize your failures of the past so that you can move forward on a new level.
Taking the time for reflection is essential. Self-blindness about your own failures does not help you grow; therefore seek opportunities to recognize your failures and learn from them. We all make errors at times. What makes us human is the ability to learn from our mistakes.
Don’t be shy with your ideas
Throw your ideas out there, and then be the biggest critic of them. You only stand to gain when others see your ideas (unless you’re crazy).
Sharing your ideas is essential for the community to grow. Pay back what you get from the community by bringing in your own ideas. Of course, they might be subject to criticism, but this criticism will help you grow, learn from the mistakes you make, and contribute to your professional growth over time. If you shyly hide your ideas in your mind, they’ll be gone when you’re gone; shared, they may even survive you.
Lessons always come at a cost
Stay positive, because if you are getting punished by your mistakes, remember that you are also learning from your mistakes.
There is no such thing as a free lunch. At times you will wake up and see you have painted yourself into a corner. It’s essential to take course corrections at that point, even though this might mean switching jobs. It’s a worthwhile lesson, but it comes at a price – and at times you may have difficulty paying that price, though it’s the sanest thing to do. Remain positive about these changes.
Mary Poppendieck and James Bach taught me to look out for lessons in everyday life; everything is connected with everything else. So, as I visited the local McDonald’s restaurant during today’s lunch break, I started to observe while waiting for my lunch. Here is what I observed, filled in with some lessons from systems thinking and the Theory of Constraints.
The restaurant was unusually full today. There were three counters open, serving the queued and waiting people. Whenever there is a queue, something is being locally optimized and the bottleneck behind it is not being exploited, so I started to look for the underlying reasons. Since I had previously worked at a supermarket counter, I had a feeling for what to watch out for.
At the first counter, a young woman was serving. She was obviously struggling as I watched her use the cash register and give out change. She seemed very uncertain; routine hadn’t settled in for her yet. Obviously that queue was a bad choice to wait in for food, since I was hungry.
At the next counter, a young man was serving. He had a bit more routine or experience than the young woman next to him, but he seemed rather inexperienced compared to others I had seen on previous visits. In addition, that guy seemed to be in the role of catching up on serving food to the tables, exchanging fries when they were finished, and so on. So he was really overworked, and I could sense it by observing him for maybe two minutes. No good choice to wait in when you’re hungry, either, but I took that queue, since the colleague who went with me was waiting at the last open counter, and I found it a better mix to have us wait in two different lines.
At the last counter, a real senior was serving. I have seen him there several times in the past, and he was really fast compared to the others. So my colleague got seated long before I placed my order.
Now, this isn’t yet the full story. What then happened was that the guy from the McDrive counter helped these three out with the registers and the serving. Given that the drive-through is the most critical, fastest-served place at such a restaurant, that guy was on a par with the senior at the last counter. Now, the McDrive guy was not helping out on every counter, but on a single one. So, guess where he helped out? Yes, right: at the last of the three, which was already way outperforming the other two. If I had to put a number on it, that counter had double the throughput of the other two counters even without anyone helping there.
Why is this a problem? Well, after I finally got served at the last counter – the queue was empty right before I got to order at the second counter – I joined my colleague and explained the following to him. There were basically three queues. There was one bottleneck station, the first counter; the second counter had become a bottleneck station itself by taking on all the work that the first woman was not getting to, since she was too slow at serving; and last, there was the regular station, which was not a bottleneck.
From the systems point of view, the regular station got optimized when the McDrive guy started to help out. The Theory of Constraints taught me that instead everything should have been directed towards the bottleneck stations. This could have meant having the McDrive guy help the second counter with the surrounding preparations – the fries, or serving food to the tables. It could also have meant freeing up some of the woman’s time by fetching the ordered food for the customers in her line, so that she mainly needed to take care of the coins and the change. Instead, a non-bottleneck workstation got optimized, resulting in local optimization.
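A toy calculation makes the local-optimization point concrete. All numbers here are invented for illustration – the only fact carried over from the story is that the third counter was roughly twice as fast as the other two:

```python
# Toy model of the three counters: each has a service rate
# (customers per minute) and a queue of waiting customers.
RATES = {"counter_1": 0.5, "counter_2": 0.5, "counter_3": 1.0}
QUEUES = {"counter_1": 6, "counter_2": 6, "counter_3": 6}

def mean_drain_time(helped, boost=0.5):
    """Average time (minutes) the three queues need to empty when
    the McDrive helper adds `boost` capacity to one counter.
    Each queue drains in (customers waiting) / (service rate)."""
    rates = dict(RATES)
    rates[helped] += boost
    drains = [QUEUES[c] / rates[c] for c in QUEUES]
    return sum(drains) / len(drains)

# Helping the already-fast counter (what actually happened):
print(round(mean_drain_time("counter_3"), 1))  # 9.3 minutes
# Helping one of the bottleneck counters instead:
print(mean_drain_time("counter_1"))            # 8.0 minutes
```

Even in this crude model, adding the helper’s capacity to a bottleneck improves the average wait more than adding it to the counter that was already fastest – the essence of exploiting the constraint rather than optimizing locally.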
Another reason this optimization choice was bad is that the two others did not get the slack to learn how to work faster. The worst performers in that restaurant were constantly under pressure, since more and more customers were queuing up all the time. Instead of having slack time to think about how to do their work more efficiently, their thinking collapsed under the perceived pressure of the customers waiting in line to be served. Surely they will not have learned anything during the maybe fifteen minutes I could watch them, since they were permanently dealing with the next fire to put out.
Finally, how does this relate to software at all? Well, give your people the right amount of slack, the right amount of time to think, and watch out for local optimization. Alistair Cockburn uses the analogy of “unverified decisions” as inventory items to carry Lean lessons and the Theory of Constraints over to software development. So, find the workstations where these “unverified decisions” pile up. Where in your process are many decisions handed over to someone else? Where are the most handed over? These are your bottlenecks. After having identified the bottlenecks in your process, you may start to exploit them, or put quality steps in place right before the decisions get handed over (this is the major case for test-driven development), or see how others can bring some relief to the bottleneck station by taking over some of its work. Taking the problem to the systems view helps understand the dynamics in place – for software development as well as for serving food in a fast food restaurant.