Beware of your Green Field Messes

This week I paired with a colleague from another team. We were reworking one of our approaches and had started by implementing one of the class responsibilities anew. We wanted to improve the performance of the overall test suite and needed to look into several classes to find out where the seconds and milliseconds got lost. I already knew there was one class which violates the Single Responsibility Principle, and I had been itching to look into the code of that class for several weeks. Basically I was considering two options: either we would start rewriting the major classes around our tests anew in order to improve the performance, or we would pay down the technical debt in the existing code that caused our performance problems.

We decided to start with the first option, so we spent all of Monday on some Ping-Pong Programming for the new class we were building. When we tried it out, it worked from a functional point of view. But when we wanted to incorporate our new class into the old code, we realized that this would be a major change. So we started to take a closer look into the old code. We soon realized that there were some simpler problems in the old classes, and fixing them would be easier than the overall rewrite.

Some weeks ago a blog entry from Robert Martin had made me aware that I would need to clean up my own mud myself. Still, I had to make this experience myself to realize that his statement was absolutely right. Hopefully I will be smarter next time.

Software Testing Craftsmanship's influence on personal development plans

Since I will be attending the Software Craftsmanship Conference in London on 26th February 2009, I started making up my mind about what I believe belongs in the field of Software Testing Craftswomen and -men. This entry is a follow-up to Skillset Development, which I came up with earlier. Since I wrote that article, I have started to think about the craftsmanship aspect of software testing. While I previously just thought about the development of a single skill in ShuHaRi terms, craftsmanship describes a combination of skills as apprentice, journeyman and master. (Usually I compare this with Star Wars Jedi ranks: Padawan, Jedi Knight and Jedi Master, but I don’t know if George Lucas holds the copyrights on these terms.)

This year I took some time with the two colleagues reporting to me and discussed a personal development plan with each of them. The results seemed to be successful for both, so I decided to share the approach we took. Basically I began with an introduction of ShuHaRi for individual skill development, also mentioning the interrelationships – there is Shu in the Ha stage and both Shu and Ha in the Ri stage. For visualisation purposes I drew three concentric circles: in the inner-most I wrote Shu, the middle one I labelled Ha, and the outer-most circle Ri.

Then I began to introduce the craftsmanship aspect of our business. On the opposite side of the whiteboard I wrote down the terms apprentice, journeyman and master and explained that we were meeting in order to get a common picture of where the individual colleague would classify himself and what we could do to move that classification forward in the next year, i.e. from journeyman to master on the scale. While the classification was neither very precise nor clear-cut, it gave us a good rule of thumb for the process to follow.

After introducing my view on what targets we could set for a personal development plan for each of the two, we divided the work we do in our test group into several categories, which we identified on an individual basis. Here are the main categories we came up with for both:

  • Technical skills
  • Leading skills
  • Collaboration skills

Please note that I consider these categories to be team-, culture- and organisation-dependent. For the technical skills we came up with test methodology, as discussed in many books out there, and programming skill for test automation. During the past year we introduced the Framework for Integrated Tests in our team. Since we could not count on much support from our development team, we had to write the fixture code ourselves, and I introduced many aspects from test-driven development, design patterns and refactoring, combined with pair programming and continuous integration. Therefore programming skills were included in both sessions.
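Just to sketch the idea of that fixture code, here is a minimal, made-up FIT ColumnFixture in Java. The class name, fields and discount rule are invented for this example and are not from our actual test suite; only the ColumnFixture mechanism itself (public fields as input columns, public methods as checked output columns) comes from FIT.

    import fit.ColumnFixture;

    // Hypothetical example fixture: a ColumnFixture maps table columns to
    // public fields (inputs) and public methods (expected outputs).
    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;      // input column from the test table

        public double discount() {     // output column, compared against the table
            return orderTotal > 100.0 ? orderTotal * 0.05 : 0.0;
        }
    }

A test table with the columns orderTotal and discount() would then drive this class row by row.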

For the leading part I explained that I see two ways to apply leading skills in our group: either as a technical lead within our project structure or as leader of the group itself. Since the latter role is already taken by me, the technical lead opportunities remained. Since I had put one of my colleagues into a major project as technical lead for the testing part, this topic was very interesting for him. My other colleague told me that he considers himself far enough along on the technical side to become a leader there. That seemed reasonable to me, so we did not spend much time on this topic.

The last main category covered the skills you need in order to collaborate with people outside our group. This includes, for example, giving presentations, working together with people from the development team and project management, and general etiquette. I found this topic a bit difficult to handle during the sessions, but we made it through.

After discussing the sub-items we wanted to look at, we first identified where each of the two was currently located based on the ShuHaRi categorization. Jointly we went through each major category, identified topics we wanted to consider, and wrote down one to three things together with a good guess of where the individual saw himself. After capturing the current development status, we were ready to review the list and identify steps to work on in the current year. For each of the previously mentioned items we discussed whether we needed to improve and what we could do about it. For some of the topics we were able to identify two to three things we would like to try out. All of this was written down on a flipchart page, which we took with us afterwards.

When we had finished each individual list, we agreed on a review date for it. The more junior colleague suggested looking back at it after three months; the other suggested a period of six months. After talking to each of them, we additionally agreed to hang the flipchart up in our office right at their desks as a reminder in our day-to-day work.

I would appreciate feedback on this approach in the comments. If you have some thoughts on how I could improve this process, I would be glad if you left a note about it. Hopefully, by following this approach, I will be able to come up with a list of skills I consider relevant for a software tester in a few more months.

Impressions from a Black Belt Testing Challenge

Two weeks ago Matt Heusser asked for participants for a Black Belt Testing Challenge. Since I could not resist the challenge, I wrote him an e-mail saying that I would like to take part in it. The challenge consisted of watching a video – a technical talk from a conference – and defending my personal view on it. When I wrote him my reply to the video and how I see the test automation strategy shown in it, we started to combine each other’s thoughts. By doing so we both realized that there is even more to testing than one might have learned from past experience. From my point of view we were hit by the realisation of the context-driven testing approach as lately made clear by Cem Kaner.

In the end I was happy to read this line from Matt:

You are the first person to successfully step up to the challenge!

Technical Debt applied to Testing

During the last two days one thing has puzzled me. Yesterday I had our annual review meeting at work, and during that session one topic came up that I kept thinking about the whole evening. Since I had read that morning on Sean Lendis’ blog about Technical Debt, the conclusion I came to was that this was an occurrence of something I would like to call Technical Test Debt. Maybe someone has already given this a different name, maybe it’s just unprofessionalism, but I’ll call it Technical Test Debt for now. In order to introduce Technical Test Debt, I’ll start by revisiting Technical Debt. Here is a quotation from Ward Cunningham’s site which holds the first definition:

  • Technical Debt includes those internal things that you choose not to do now, but which will impede future development if left undone. This includes deferred refactoring.
  • Technical Debt doesn’t include deferred functionality, except possibly in edge cases where delivered functionality is “good enough” for the customer, but doesn’t satisfy some standard (e.g., a UI element that isn’t fully compliant with some UI standard).

A key point I see about Technical Debt is that, on the development side, it occurs on Agile teams as well as on non-Agile, Waterfall, V-Modell, whatever teams. One thing, for sure, is difficult to do: measure Technical Debt. James Shore came up with a definition last year, but it has not fully satisfied me so far. To quote Martin Fowler here:

The tricky thing about technical debt, of course, is that unlike money it’s impossible to measure effectively.

To give my point a little more context: in the past I was mostly involved in test automation work at my current company. We place a high value on automating most of the tests we do, so that we can run them several times before the software gets to the customer. For some hard-to-test components, however, we decided during the last year not to do this and to rely on manual tests instead. We ran into a problem when we were faced with the human factor of software development. Our Release Management group, which provides the software packages to the customer, packaged an older version of the stand-alone component we had only tested manually. Since we were not expecting the software to change, we had not run those manual tests before bringing the patch to production. Due to confusion while generating the package, it contained more than it should have, one software component was outdated, and this led to a production problem – which fortunately could be fixed by a highly responsible developer.

My conclusion yesterday was that I had come across Technical Test Debt. Since we were not expecting the software component to change after the first approval from our customer, and test automation seemed a high cost at the time we made the decision, we declined to write any automated test cases that would verify the software for each delivered package. We were well aware of this issue, but we did not take the time and effort to pay off this Technical Test Debt and cover the software component with automated test cases running over each package we create – before it screwed up the production system.
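As a rough illustration of what paying off that debt could look like, here is a minimal sketch of an automated check over a delivered package. The file path, version string and check itself are made up for this example; the point is only that such a check would run for every package we create.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Hypothetical sketch: fail loudly when the packaged stand-alone component
    // is not the version we expect to deliver. Path and version are invented.
    public class PackageVersionCheck {
        public static void main(String[] args) throws IOException {
            String expected = "1.4.2";
            String packaged = new String(
                    Files.readAllBytes(Paths.get("package/component.version"))).trim();
            if (!expected.equals(packaged)) {
                throw new AssertionError("Outdated component in package: expected "
                        + expected + " but found " + packaged);
            }
            System.out.println("Package contains component version " + expected);
        }
    }

Hooked into the packaging or continuous integration run, a check like this would have caught the outdated component before the patch reached production.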

Another thing I would like to call Technical Test Debt concerns test automation as well. During my first days at my current company, we used shell scripts for test automation. There was a quite impressive set of shell-script functions which, in combination, automated most of the work for regression tests. The debt here is similar to Technical Debt in software: since no experienced designers were assigned to the group to steer future improvements in a manageable direction, the shell-function codebase grew, and over the rushes of several projects no one really got the time to pay off this Technical Debt in the test automation codebase. After some internal reorganisation my team took the hard line and replaced our legacy shell-function based test automation framework with FitNesse. By doing so we managed to gain an order-of-magnitude improvement by keeping Technical Debt in the automation codebase as low as possible. We used regular refactoring, unit tests that communicate intent, and pair programming as key factors there, but I also see some shortcuts currently in one class or the other, where I know directly that this Technical Debt was introduced by the first point from the list above.
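To illustrate what I mean by unit tests that communicate intent in the automation codebase, here is a small, made-up example in the style we used. VersionStringParser is not a class from our real fixture code; the test only stands in for the kind of documentation-by-test I have in mind.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Invented example: a tiny helper plus a unit test that states its intent.
    public class VersionStringParserTest {

        // Minimal helper under test, included here so the example is self-contained.
        static class VersionStringParser {
            private final String[] parts;
            VersionStringParser(String version) { parts = version.split("\\."); }
            int major() { return Integer.parseInt(parts[0]); }
            int minor() { return Integer.parseInt(parts[1]); }
        }

        @Test
        public void extractsMajorAndMinorFromDottedVersionString() {
            VersionStringParser parser = new VersionStringParser("1.4.2");
            assertEquals(1, parser.major());
            assertEquals(4, parser.minor());
        }
    }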

While I’m pretty much convinced that the first of my two points is clearly Technical Test Debt, the second point is debatable. I like to define test automation as a software development effort, with all the underlying assumptions about the automation code that this implies. Taking this view, I would say that the second point is simply Technical Debt that occurred during a test automation effort. Another thought that strikes me about the first point is that Technical Test Debt, in the sense of tests left unwritten, may carry high risks. That’s why a tester needs to know about risk management in the software under test.