While Diana Larsen was in Germany in July, she spoke about a course she was taking called Human Systems Dynamics. Since then some of my colleagues have started to dive into it, and so did I. I didn't take the course, but decided to go for some of the books on it. The first one I came across is called Facilitating Organization Change – Lessons from Complexity Science; it covers complexity science, self-organization, and how to introduce changes into a complex adaptive system (CAS). These are some of my first thoughts after finishing the book.
Software Development Lessons from observing a movie team
A while back I was part of a team that made a small advertisement movie. We planned this back in June and finally made the shots on a single day in early September. I learned quite a few lessons from the crew and from how they worked. Even though I didn't get to work with Steven Spielberg, I could still see parallels between the movie profession and my life as a software tester and software developer. Time to reflect on that.
Software Craftsmanship is real in Germany
Do you know what happened on March 7th, 2009? According to Wikipedia, the Real Irish Republican Army killed two British soldiers and two civilians, the first British military deaths in Northern Ireland since The Troubles, and the Kepler space observatory, designed to discover Earth-like planets orbiting other stars, was launched. Right. But something also happened in the software development sphere. You don't remember? The Software Craftsmanship Manifesto went public. Amazing.
Paul Pagel announced the publication on the Software Craftsmanship mailing list using the following words:
Craftsmen,
In December, many of us met in Libertyville at the first ‘craftsmanship summit’ with the intent of establishing a set of principles for Software Craftsmanship. After the summit, the conversations continued on the mailing list, finally culminating in an elegant summarization of our thoughts by Doug Bradbury. Today, we put the manifesto up on the web as a statement of our values as craftsmen. I’m inviting you, as members of the software craftsmen community, to view the manifesto and, if you choose, to sign it.
http://manifesto.softwarecraftsmanship.org/
This is an exciting time for all of us, a result of the participation and thoughts of everyone, both at the summit and in the subsequent discussion online. I want to thank everyone for carrying these ideas forward.
Best,
Paul Pagel
About one week later, Mark Levison interviewed some of the folks who were involved in the early days of the movement, among them Micah Martin, who found the right words when he saw that 1500 people had already signed the manifesto after just one week:
As of today, there are over 1500 signatures on the Manifesto. 1500 people are fighting against “crap code”. Those who have been fighting “crap code” now know that they are not alone in their fight. Those who write “crap code” now know that there are 1500 people fighting against them.
The Manifesto is a gentle push away from “crap code” and toward craftsmanship.
That quote immediately sprang to my mind last Saturday when I headed home from the first German Software Craftsmanship and Testing (un)conference. With 52 participants from at least five different countries, we got together to carry on that message and to fight crappy code.
No wonder, then, that we had a lot of coding sessions. Sandro Mancuso, for example, led a session based on the nine rules from Jeff Bay's Object Calisthenics, and Stefan Roock ran several coding dojos on the Open Space day.
But beyond that, I was glad to see other topics that are sometimes forgotten when speaking about craftsmanship. In fact, if you take a closer look at the manifesto, it states that we want to raise the bar by growing communities and by helping others learn what we do and how we do it. Accordingly, there was a talk by Fabian Lange on performance, one by Uwe Friedrichsen on architecture, one by Pierluigi Pugliese on soft skills for developers, and my session on self-education in testing. We also got together in a fishbowl on software craftsmanship to seek opportunities to go further from the starting point of SoCraTes and to grow communities all over the country. We had Uri Lavi, who leads the user groups in Israel, as well as Sandro Mancuso, who runs the London Software Craftsmanship user group.
In a call to action on Saturday we decided who was interested in running meet-ups in the next two months in different locations all over Germany. If you live near Hamburg, Osnabrück, Münster, Bielefeld, Düsseldorf, Köln, Karlsruhe, or Frankfurt, keep your eyes and ears open. Soon someone will probably be starting something, announcing something, or just getting together in a bar to discuss Software Craftsmanship and what it means to us.
I was really impressed. I was around in the early days on the Software Craftsmanship mailing list back in December 2008 when it all started, was in the loop of discussions around the Ethics of Software Craftsmanship, contributed to the Wandering Book, and helped organize this event (please send the real kudos to Andreas Leidig and Nicole Rauch, who mostly put this conference together alongside Bernhard Findeis, Martin Klose, Marc Phillip, and myself). It was amazing to see all of this become real and to kick off the next actions to grow a community in Germany.
I look forward to more to come after this inaugural get-together of the German Craftsmanship community, and I will surely report on the things that are going to happen.
ALE2011: 10 years of Agile – Are we there yet?
At the ALE 2011 conference, Rachel Davies held the first keynote of the day. She reflected on 10 years of Agile software development, and what this probably means for the future.
Improvement vs. Radical Change
In the Kanban, Scrum, and Lean sphere there is a continuous discussion about the radical-change nature of Scrum vs. the evolutionary-change method of Kanban. In Kanban these two modes are often referred to as Kaizen and Kaikaku. But what's the difference? When do I need one or the other? And what does this say about change in general?
Kaizen
Kaizen is about improvement. Retrospectives in the Agile methodologies help to foster continuous improvement: after a limited time the team gets together and reflects on the past few weeks. In the original methods, retrospectives were bound to the iteration length, like one to four weeks. With the rise of iteration-less methodologies like Kanban, retrospectives get their own cadence and don't necessarily fit the planning boundary.
Retrospectives help to improve one to three things that didn't work well. Ideally, the actions from a retrospective change the development system just a little bit. In complex systems such changes may have unpredictable consequences; this is why we restrict ourselves to one to three items. If we try to implement more changes at a time, we are likely to turn the underlying system completely around, thereby getting an unpredictable system.
Over time such little changes eventually lead to a system which gets stuck. If you keep on improving a little bit, time after time, you climb uphill towards a local optimum. Once you pick a particular route on that journey, you might find yourself on a local optimum beside two higher mountains. But how do you get from your local optimum to a higher one?
Kaikaku
This is where Kaikaku comes into play. If you get stuck in a local optimum, you may have to apply a radical change to your system in order to improve further. This comes with a risk: once you set up for a radical change, you will get another system. Introducing Scrum into most organizations, for example, is such a radical change. Where does my project manager belong in the three roles of team, ScrumMaster, and ProductOwner? How do we integrate our testing department into the development teams? These are rather large steps.
Comparing this with the landscape of mountains, Kaikaku is a large jiggle, like an earthquake, which might throw you onto a completely different area of the landscape. You might find yourself on another mountaintop afterwards. You might also find yourself in a valley between two larger mountains. Or you might end up in the desert of lost hopes.
This also explains why too much radical change eventually leads to an uncontrolled system. Since you keep on jumping from left to right, you never settle long enough to get a stable system in which you can apply smaller improvement steps. In fact, your system might totally collapse under too much Kaikaku.
This also explains why you should seek the right mix of smaller improvements and larger radical changes. Once you get stuck, reach for a larger step, but just one at a time. From that new ground, start to go uphill again by taking smaller improvements, one at a time. Over time you will eventually end up with a system that is continuously improving itself.
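The mountain-landscape metaphor above can be sketched in code. The following is a minimal illustration, not a real change-management tool: an array stands in for the improvement landscape, small Kaizen steps are greedy hill climbing, and a Kaikaku jump is a restart from a different spot. All names and numbers here are made up for the sketch.

```python
# A one-dimensional "improvement landscape": index = state of the system,
# value = how well the system performs. It has a local peak (value 5 at
# index 2) and a higher global peak (value 9 at index 7).
landscape = [1, 3, 5, 2, 1, 4, 7, 9, 6, 2]

def kaizen_step(pos):
    """One small improvement: move to a neighbor only if it is better."""
    best = pos
    for neighbor in (pos - 1, pos + 1):
        if 0 <= neighbor < len(landscape) and landscape[neighbor] > landscape[best]:
            best = neighbor
    return best

def climb(pos):
    """Repeat small steps until no neighbor is better: a local optimum."""
    while (nxt := kaizen_step(pos)) != pos:
        pos = nxt
    return pos

# Pure Kaizen from the left edge climbs to the local peak and gets stuck.
stuck = climb(0)
print(landscape[stuck])  # 5

# Kaikaku: radical jumps to different spots on the landscape, each followed
# again by small Kaizen steps. Some jumps land badly, but from enough of
# them the climb reaches the higher peak.
best = landscape[stuck]
for jump_target in range(len(landscape)):
    best = max(best, landscape[climb(jump_target)])
print(best)  # 9
```

The sketch also shows why one change at a time matters: each `kaizen_step` moves only to an adjacent state, so its effect stays predictable, while a jump trades that predictability for the chance of a better mountain.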
Some Software Craftsmanship treasures
While reviewing some proposals for the SoCraTes (un)conference, the German Software Craftsmanship and Testing conference, I wanted to look something up in the principle statements that we came up with back in 2009, shortly after writing down the manifesto. Unfortunately I found out that Google Groups is going to shut down files and pages inside groups; right now you can only download the latest versions of the files and pages.
After downloading them, I found some treasures which I would like to keep, even after Google takes down the pages section in their groups. So, here they are.
Software Craftsmanship Ethics
I was involved in the discussion that came up to identify principle statements similar to the Agile manifesto and its principles. It was Doug Bradbury from 8thLight who constantly tracked what the other twelve people on the mail thread were saying, and derived something meaningful out of it in the end. I don't recall why these principles – which we later called the ethics – were never published on the manifesto page, but I think it had something to do with the discussion on the mailing list after we announced the final draft. (I obviously didn't take the time to follow that discussion; there were too many replies for me to keep track of.) So, here is the final version. Interestingly, the four main categories mirror the four statements of the manifesto.
The Software Craftsman’s Ethic
***DRAFT*****
We Care
We consider it our responsibility
to gain the trust of the businesses we serve;
therefore, we
take our customer’s problems as seriously as they do and
stake our reputation on the quality of the work we produce.
We Practice
We consider it our responsibility
to write code that is defect-free, proven, readable, understandable and malleable;
therefore, we
follow our chosen practices meticulously even under pressure and
practice our techniques regularly.
We Learn
We consider it our responsibility
to hone our craft in pursuit of mastery;
therefore, we
continuously explore new technologies and
read and study the work of other craftsmen.
We Share
We consider it our responsibility
to perpetuate the craft of Software;
therefore, we
enlist apprentices to learn it and
actively engage other craftsmen in dialogue and practice.
Original Software Craftsmanship Charter
In the early days we were struggling with how to get started. Back in November and December 2008 we collected some statements from the books that we felt strongly about. In the archive, this is kept as the original Software Craftsmanship charter. Later some of these statements made it into the manifesto and the principles. You can already see the structure of the final manifesto in there, but it's still merely a brainstorming list. Here is the version from the Google Groups pages:
Original Software Craftsmanship Charter
Raising the Bar
As an aspiring craftsman/professional,
… we can say no– Do no harm
… we can work in a way we can take pride in.
… we take responsibility for the code we write
… we believe the code is also an end, not just a means.
… we follow a strict set of practices and disciplines that ensure the quality in our work
… we live and work in a community with other craftsmen
… we will help other craftsmen in their journey
… are proud of my portfolio of successful projects
… can? point to the people who mentored me and who I mentored
Here are some of my suggestions: (DougB)
As aspiring Software Craftsmen we are raising the bar of professional software development.
??? We are proud of our work and the manner of our work
??? We follow a set of practices and disciplines that ensure quality in our work
??? We take responsibility for the code we write
??? We live and work in a community with other craftsmen
??? We are proud of our portfolio of successful projects
??? We can point to the people who influenced us and who we influenced
??? We believe the code is also an end, not just a means.
??? We say no to prevent doing harm to our craft
My suggestions: (Matt Heusser)
? We take responsibility for the code we write ++
? We take responsibility for our own personal software process(*)
? We take responsibility for the outcome of the process
???? That is to say, a laborer delivers software on specification
???? A craftsman develops a solution to an interesting and ambiguous problem in a way that delights the customer
(*) – not the one owned by Watts Humphries
List of questions
Someone (I forgot who) mentioned a list of interview questions from the 8thLight office. Some of the inaugural software craftsmanship user group meetings were held there in Chicago back in December 2008, and eventually a basis for the manifesto was crafted together. One of the attendees wrote down some interview questions which were floating around. Here is the list:
List of Questions
- Do you follow a particular process to create your work?
- What tools have you built to enhance your work?
- When do you stop re-factoring and enhancing your code?
- What are your training techniques?
- How much time do you spend per week coding outside your main job?
- How do you react when you discover a bug in your own software?
- What are the first things you would teach a new apprentice?
- How many languages do you know and can use consistently in the same project?
- What are your most important influences in the programmers community?
- Who is the best developer in the world in your opinion?
- What makes you passionate about software?
- Who else would call you a craftsman?
- Do you consider your self involved with the software community?
- Can you deliver consistent results in your code?
- Can you define what good code is?
- Can you point to some source code that you consider a masterpiece?
- How do you react to something that you are forced to ship but is not consistent with your practices? (for example not tested?)
- How do you stay current with industry standard?
- Would you go back to a past customer project to fix your own bugs?
- How do you define aesthetics and pragmatism in software?
Final words
So, having put these artifacts from the early days of software craftsmanship on my blog, I hope they won’t get lost. I still hope that the ethics statements we came up with will make it to the manifesto page one day, but until then I can reference this blog entry.
“Fully automatic software testing now possible” – Really? Hmm? Soooo?
Part of the gap between computer science as taught at universities and software development as done in our industry is what Alistair Cockburn lists as one of the early methodologist errors: I did this, now do what I did, and you'll be happy ever after. This notion is not only disrespectful of the achievements our industry has brought about, but it also ignores the difference between lab situations and the context of software development shops all around the globe. This is nothing I came up with, but an observation I have made whenever I teach anything. The first reaction people have to something new that's imposed on them is "but this does not work for me" – until you show them how it's possible, and find out for yourself that the combination of Spring, SOA, JBoss, GWT, Swing, and Ruby, or any other combination of buzzword technologies from the past two decades, comes with its own pitfalls and fallacies, and whether your beloved approach ends up being useful at all. In fact, a while ago James Bach claimed that quality is dead due to the unmanageable stack of technologies and abstractions our industry has to deal with. I would even go further and claim that no one will be able to handle the Y10k problem in eight thousand years if we continue like this.
One of the fads that keeps reappearing is the idea of automating humans out of software development work altogether. This fad came up with the rise of UML, and its most recent incarnation appears to be model-based testing. One of the interesting things I noticed is the ignorance of past movements. Universities seem to keep bringing up new talents who claim to save the world, because universities favor a competition-based learning model in which everyone wants to be the next hero. Of course this is garnished with some flavor of Pandora's Pox:
Nothing new ever works, but there’s always hope that this time will be different.
(The Secrets of Consulting, Gerald M. Weinberg, Dorset House Publishing, page 142)
On a side note, the same author recently wrote about this ignorance of past experiences in the context of development models like structured programming or Agile.
Up until now my hope was that model-based testing was a fad that would disappear quickly, with industry leaders ignoring it completely to start with. But the hope that model-based testing will be different keeps recurring, despite the voices of highly skilled consultants in our field – for example, take this blog entry from James Bach, dated 2007. Four years have passed since then.
I decided to ignore this fad for as long as possible, but this morning I read about model-based testing in a way that made me angry. Of course, the problem is not model-based testing in itself, but my reaction to what was written on that particular webpage. Fully automatic software testing now possible is its title. So far, I have criticized some of the work on model-based testing. These discussions almost always ended in the agreement that you shouldn't base all of your testing efforts on model-based testing (MBT), but combine it with other approaches as well. I am fine with this. If MBT serves you better than another approach for a particular effort, go ahead. But I wouldn't invest all of my money in a single stock. Or run a consultancy with a single client. In fact, most countries won't allow you to run a consultancy with just a single client. Think about why for yourself.
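For readers who haven't seen model-based testing in action, here is a minimal sketch of the idea – not the vendor's tool, and not any real MBT framework. A toy door (a hypothetical system under test, invented for this sketch) is checked against a finite-state model of its expected behavior by automatically generating every short action sequence:

```python
import itertools

class Door:
    """Toy system under test (a made-up example, not a real product)."""
    def __init__(self):
        self.state = "closed"
    def open(self):
        if self.state == "closed":
            self.state = "open"
    def close(self):
        if self.state == "open":
            self.state = "closed"
    def lock(self):
        if self.state == "closed":
            self.state = "locked"
    def unlock(self):
        if self.state == "locked":
            self.state = "closed"

# The model: expected state transitions. Any (state, action) pair not
# listed here is expected to leave the state unchanged.
MODEL = {
    ("closed", "open"): "open",
    ("closed", "lock"): "locked",
    ("open", "close"): "closed",
    ("locked", "unlock"): "closed",
}
ACTIONS = ["open", "close", "lock", "unlock"]

def run_mbt(length=4):
    """Generate every action sequence up to `length` from the model and
    check the implementation's state after each step."""
    failures = 0
    for seq in itertools.product(ACTIONS, repeat=length):
        door, expected = Door(), "closed"
        for action in seq:
            expected = MODEL.get((expected, action), expected)
            getattr(door, action)()
            if door.state != expected:
                failures += 1
                break
    return failures

print(run_mbt())  # 0 -> the implementation matches the model on these paths
```

Note what such generated tests can and cannot find: they check only what the model encodes, and any behavior nobody thought to model stays untested – which is exactly the limitation the rest of this post is about.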
But the article – no, I wouldn’t call it article, it’s rather a marketing piece for a particular company – seems to fully ignore such contextual considerations. Let’s take a look at some sentences before I explain the pitfalls of model-based testing, and what to do instead.
The system not only facilitates quick and accurate software testing, but it will also save software developers a great deal of money.
Really? Every software developer in the world? Even those who are performing highly using TDD and Specification by Example? I doubt it. Since no data is mentioned at all, this claim is hard to justify. And in the end, how much is "a great deal of money" in the first place? A great deal for my private life would be 100,000 Euros, but a great deal of money for an enterprise surely looks different.
Our automated method can improve product quality and significantly shorten the testing phase, thereby greatly reducing the cost of software development.
And if I don't treat testing as a separate phase to start with, what happens then? Also notice that their understanding of product quality is not defined. (On a side note, I added the word "quality" to my set of lullaby language terms.) So, what does an improvement in product quality mean? How would I notice it? Connecting the first quote with this second one, it also appears that shortening testing is assumed to reduce the costs of software development linearly. This commits three fallacies from Jurgen Appelo's Management 3.0 class at once:
- Machine Metaphor Fallacy – Don’t treat organizations as a machine (organizations are complex systems).
- Linear Behavior Fallacy – Don't assume things behave in a linear way (behavior in a complex system is non-linear).
- Unknown-Unknowns Fallacy – Don’t think you have covered all risks (the Titanic effect will hit you if you do).
The testing phase for new software consists of three steps: developing the tests, running the tests and evaluating the results.
I think this belief is the reason why testers are treated like monkeys, and why software testing keeps being outsourced to low-wage countries where the power grid still has outages. What's missing in this list? Think about it. There is one key ingredient missing, and I have not seen anyone trying to automate it. You're with me? Yes? Exactly! It's the learning. Test execution and test results lead to learning when executed manually, or when supervised. This learning informs the development of the next test, in one way or the other. There are learning approaches that could work for some particular parts of software testing; for example, there are systems which learn to flag risky code changes based on changes that introduced bugs in the past. But this addresses only one of many risks in software development.
When used properly, the method completely eliminates the need for manual software testing
Besides the hint at "when used properly" (people are not very disciplined at using anything properly), I sincerely doubt that the need for manual software testing will be eliminated by their system. Prove me wrong. Please.
Model-Based Testing has a number of major advantages: it makes the software testing process faster, cheaper and more accurate.
I’m speechless about such statements. Speechless except for: “compared to what?”
It is not uncommon for manual software testing to take anywhere from several months to years.
Finally, a hint at the problem they are trying to solve. So, "manual testing takes too much time" appears to be the problem that MBT addresses. But I don't see a clue as to why testing is taking so long. Matt Heusser taught me a while back that this is an impossible-to-answer question. "So, testing should take less time. Which of these risks should I leave unaddressed with my testing, then?" is a more congruent answer.
The pitfalls
One of the pitfalls of MBT, if sold with claims like the above – and I really hope they don't mean these statements seriously – is the belief that all risks are covered. My favorite quote from Jerry Weinberg on this is the Titanic Effect from Quality Software Management Volume 1 – Systems Thinking:
The thought that disaster is impossible often leads to an unthinkable disaster.
If you look for such occurrences, you don't have to go as far back as the Titanic. Take the Challenger, volcano eruptions, or Fukushima, just to name a few.
But words can only describe so much. Let's run an experiment. I ran through this experiment a few months back when I approached Michael Bolton to explore MBT, so this is really his exercise. Let's apply model-based testing to a simple website for children. Here is the Horse Lunge Game. Develop a model for this game. I will wait until you come back.
Finished? Already? Maybe you should look deeper. Go over your model once again to see if you left anything out.
Alright, this should be fine by now. Now, how many bugs did you notice which were not part of your model? None? To get an idea, watch the fence in the background. Is it floating continuously at the same pace, or is there a jump in it? Was that part of your model? Really? If this didn't convince you yet: does your model state that none of the horses should show wireframes? No? Well, do any of the horses show wireframes?
My point is that the notion of MBT as presented on that website I linked earlier (I won't do that again!) ignores the impossibility of testing everything. Usually I refer to testing a compiler at this point. Testing a compiler completely would mean running a test for each program this compiler will ever compile. So, I must digest every program in existence for the particular language that I compile (even my compiler might be written in that language), and every program that will be invented in that language in the next – say – twenty years. Then I run each and every program through this compiler and check the results. This holds not only for compilers, but for frameworks in general as well. (Side note: frameworks are the new programming languages.) This is simply not possible. Now, with MBT I would have to create a model for each program that is going to be created in the future. So, I would need to invent a time machine first. Maybe science has found a way to time travel, and MBT is a hint of this. But I doubt it.
This leads me to another point. How much does it cost to create the model? How much does it cost to maintain the model? How long does it take to run the millions of tests referenced in the text? There is no statement about this. Our industry does not even agree on how many tests to automate or what the return on investment (ROI) of software test automation is. And now we are faced with a different question: what is the ROI of test automation AND model-based testing combined? These are two variables in a complex formula. Do you see now why I don't believe that I can save a bunch of money just by applying MBT?
A different approach
An approach that works for me is to apply ATDD or Specification by Example in combination with session-based Exploratory Testing. Specification by Example helps me derive specifications for the software from the people that matter. Session-based Exploratory Testing helps me grow a mental model of the software and cover the risks that are relevant at the time. I don't claim that I can cover all risks in the time I've got; instead, I cover the risks that are most meaningful right now. Remember Rudy's Rutabaga Rule (Secrets of Consulting, page 15):
Once you eliminate your number one problem, number two gets a promotion.
In a complex system like software development, this seems to be an approach that brings more value. And besides that, Exploratory Testing places a strong emphasis on learning, which MBT appears to neglect completely.
A week with Kent Beck
In November I had the opportunity to spend a whole week with Kent Beck. it-agile GmbH had invited him to Hamburg, Germany, for two courses – Responsive Design and Advanced TDD – and one workshop, and I took both courses and the workshop. Today I was contacted by Johannes Link, who was surprised not to find a write-up of this week on my blog. It turns out that somewhere during the past year I have turned into a reporter. So, here is my summary of what I could get from my notes. Initially I planned to write it via email to Johannes, but then I thought: why not share those comments on my blog? Maybe others are looking forward to them.
Kanban in German
Today I attended a presentation of the German edition of the Kanban book by David Anderson at the Lehmann's book store in Hamburg. My colleagues from it-agile, Henning Wolf and Arne Roock, translated the English book into German. They gave an introduction to Kanban and answered questions from the roughly 100 attendees.

Software Testing Apprentices
Yesterday, Jason Gorman called for action. He made his readers aware that the state of the art of software development is poor – dramatically poor. If we continue to work and to educate the generations to come the way we do, our craft is surely set up to decline even further. Gorman argues that the educational system for software programmers is so tremendously poor that we won't survive a few years from now – assuming that Moore's Law continues to apply. Read his full call to action in his blog post Time To Look Seriously At Software Developer Apprenticeships.
Some of the points look compelling to me, considering where I have come from and where I consider myself to be heading. After leaving university in 2006 with my German diploma in my hands, I got my first job shortly after my father had died of lung cancer at the age of 58. (I had interviewed with my employer a few days before he died.) That said, I didn't know a thing about software testing and had never attended a software testing course back in university – not that there was one, nor that I would have been interested in the topic at that time. Within the first two weeks I mostly sat down to read about the product they were building, trying to absorb some knowledge from the large documentation.
But it simply didn't stick. All the time I was asking myself how this would turn out in practice; I needed something to play around with. So I got introduced to the shell scripts they had built to test the software part I would be working on. Within a few weeks I was able to run some of these tests on my own. Shortly after that I was working on the project to extend the behavior. I excelled at it, pulling in work from other colleagues when I found myself finishing my assigned tasks ahead of schedule.
One year later, we were working for our first client. In the meantime I had taken over technical responsibility for the tests we were running for the customization we built at that time. My boss's boss called me into his office and offered to let me lead a group of five, starting one week later. After one and a half years with that company, I was offered a leadership position for some of their testers. I took the job.
Two months later I attended my first formal software testing training… and found it rather boring.
That said, how do software testers learn about their craft? I don't know how all of them do, but I know how I learned it. I read in the evenings and at the breakfast table. I worked on my PC at home, trying to learn more about the area I was confronted with, trying to find new ways to "test this" and trying them out the next day. I was mentored by my superior, and I mentored some of our new colleagues during the whole four and a half years I worked with that company. I asked testers to build something, explained underlying concepts, and helped them reach an understanding of what I consider software testing to be. I helped them grow as well. Having had the same lack of formal training in software testing, I think this is what I would call an apprenticeship, isn't it?
I wasn't all too sure about this up until earlier this week. Having taken a testing challenge from Matt Heusser in early 2009, I had become a black-belt tester in the Miagi-Do school, later turning into a black-belt instructor. Over the past year I have worked with three testers through a challenge, and helped two of them go beyond their current level of expertise and skill.
One of them was Michael Larsen, the TESTHEAD. He exceeded my expectations by far. He worked through the black-box software testing course and became a mentor in that course himself. He also works at a local Boy Scout club as a Scout leader. During the past year he has produced the weekly podcast "This Week in Software Testing" together with Thomas Ponnet and Matt Heusser. He also reviewed some of our draft chapters for a book on how to reduce the cost of testing; later he signed on as a chapter author himself, taking the opportunity to contribute to something that is surely becoming meaningful.
Now, Michael just announced earlier this week that he is going to switch jobs. He is currently running a blog series on how he is preparing himself for the new job, and he lets everyone else participate in his learning. It reminds me a lot of the apprenticeship blogs I saw from 8thLight apprentices, Eden apprentices, or ObjectMentor apprentices.
But the interesting thing about Michael is that he got this new job by steadily working on his passion for testing, continuing to grow, and pushing himself to new limits. You should definitely read his blog entry on how he got where he is.
That said, apprenticeships don't seem to be a new idea for the software testing world. Lacking formal training, and disbelieving the syllabi of the certification programs out there, software testers have built a high reputation on their apprenticeship-like ways of learning, and we know how to run them. We already practice Software Craftsmanship to educate our peers. Let's continue this and intensify it even further. Maybe that could be a nice goal for 2011.