Technical Debt applied to Testing

During the last two days one thing has puzzled me. Yesterday I had our annual review meeting at work, and during that session one topic came up that I kept thinking about the whole evening. Since I had read that morning on Sean Lendis’ blog about Technical Debt, the conclusion I came up with was that this is an occurrence of something I would like to define as Technical Test Debt. Maybe someone has already given this a different name, maybe it’s just unprofessionalism, but I’ll call it Technical Test Debt for now. In order to introduce Technical Test Debt, I start by revisiting Technical Debt. Here is a quotation from Ward Cunningham’s site, which holds the first definition:

  • Technical Debt includes those internal things that you choose not to do now, but which will impede future development if left undone. This includes deferred refactoring.
  • Technical Debt doesn’t include deferred functionality, except possibly in edge cases where delivered functionality is “good enough” for the customer, but doesn’t satisfy some standard (e.g., a UI element that isn’t fully compliant with some UI standard).

A key point I see with Technical Debt is that it seems to occur on Agile teams as well as on non-Agile, Waterfall, V-Modell, whatever teams on the development side. One thing is for sure difficult to do: measure Technical Debt. James Shore came up with a definition last year, but it has not fully satisfied me so far. To quote Martin Fowler here:

The tricky thing about technical debt, of course, is that unlike money it’s impossible to measure effectively.

To give my point a little more context: in the past I was mostly involved in test automation work at my current company. We place a high value on automating most of the tests we do, so that they can run several times before the software gets to the customer. For some hard-to-test components, however, we decided during the last year not to do this and to rely on manual tests instead. We ran into a problem when we were faced with the human factor of software development. Our Release Management group, which provides the software packages to the customer, packaged an older version of the stand-alone component we had only tested manually. Since we were not expecting the software to change, we had not run those manual tests before bringing the patch to production. Due to confusion while generating the package, it contained too much; one software component was outdated and led to a production problem – which fortunately could be fixed by a highly responsible developer. My conclusion yesterday was that I had come across Technical Test Debt. Since we were not expecting the software component to change after the first approval from our customer, and test automation seemed too costly at the time we made the decision, we decided against automated test cases that would verify the software in each delivered package. We were well aware of this issue, but we did not take the time and effort to pay off this Technical Test Debt and equip the software component with automated test cases that would run over each package we created – before we screwed up the production system.
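In hindsight, even a very small automated check over every delivered package would have caught the outdated component. Here is a minimal sketch of such a check, assuming a hypothetical package layout where each component ships a VERSION file and using JUnit as the runner; the paths and the expected version string are invented for illustration, not taken from our real delivery:

    import static org.junit.Assert.assertEquals;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import org.junit.Test;

    // Sketch of a per-package smoke test. PACKAGE_ROOT, the VERSION file and
    // the expected version string are hypothetical placeholders.
    public class PackagedComponentVersionTest {

        private static final String PACKAGE_ROOT = "/delivery/current";
        private static final String EXPECTED_VERSION = "2.3.1";

        @Test
        public void packagedStandAloneComponentHasExpectedVersion() throws IOException {
            // Read the version marker the stand-alone component ships inside the package.
            String packagedVersion = Files.readAllLines(
                    Paths.get(PACKAGE_ROOT, "standalone-component", "VERSION"))
                    .get(0).trim();

            // Fail the delivery if an outdated component slipped into the package.
            assertEquals(EXPECTED_VERSION, packagedVersion);
        }
    }

A check like this costs next to nothing to run over every generated package, which is exactly the kind of payment on the debt we kept deferring.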

Another thing I would like to call Technical Test Debt concerns test automation as well. During my first days at my current company, we used shell scripts for test automation. There was quite an impressive set of shell-script functions which, in combination, automated most of the work for regression tests. The debt here is similar to Technical Debt in software: since no experienced designers were assigned to the group to lead future improvements in a manageable direction, the shell-function codebase grew, and over the rushes of several projects no one really got the time to pay off this Technical Debt in the test automation codebase. After some internal reorganisation my team took the hard line and replaced our legacy shell-function-based test automation framework with FitNesse. By doing so we managed to gain an order-of-magnitude improvement while keeping Technical Debt in the automation codebase as low as possible. We used regular refactoring, unit tests to communicate intent, and pair programming as key practices there, but I also see some short-cuts currently in one class or another where I know directly that this Technical Debt was introduced by the first point from the list above.
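For readers who have not worked with FitNesse: the fixture below is a minimal sketch of the style that replaced our pile of shell functions, not an excerpt from our codebase. It assumes the classic Fit ColumnFixture base class; the fixture name, its columns, and the trivial discount rule are invented for illustration:

    import fit.ColumnFixture;

    // Sketch of a Fit/FitNesse column fixture. The name, the columns and the
    // discount rule are made up; a real fixture would delegate to the system
    // under test instead of computing the result itself.
    public class DiscountCalculationFixture extends ColumnFixture {

        // Input columns of the decision table on the wiki page.
        public double orderAmount;
        public boolean premiumCustomer;

        // Output column: FitNesse compares the returned value against the
        // expected cell in each table row.
        public double discount() {
            // Trivial stand-in rule so the sketch is self-contained.
            return premiumCustomer && orderAmount > 100.0 ? orderAmount * 0.1 : 0.0;
        }
    }

On the wiki page, a table headed with the fixture name supplies orderAmount and premiumCustomer per row and checks the discount() column, so the intent of each regression case stays visible without reading any scripting code.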

While I’m pretty much convinced that the first of my two points is clearly Technical Test Debt, the second point is debatable. I like to define test automation as a software development effort, with all the underlying assumptions applying to the automation code. Taking this view, I would say that the second point is just Technical Debt that occurred during a test automation effort. Another thought that strikes me about the first point is that Technical Test Debt in the form of tests left unwritten might have high risks attached to it. That’s why a tester needs to know about risk management for the software under test.

2 thoughts on “Technical Debt applied to Testing”

  1. I’m not sure that technical test debt is a separate entity from the team’s technical debt. The failure to run manual tests is as much technical debt as poorly designed or outmoded automated tests. I don’t like to label things as “test” as if they are separate, because it’s all a team problem.

    Right now our team is suffering from technical debt caused by not keeping our tool versions and our canonical test data up to date. Does it matter if this is test debt or regular technical debt? Are you saying the risk is different for technical test debt?

  2. That was one of my intentions initially, but I did not want to go that far. You’re right, the situation I describe above is a team problem, since we are not the only ones who slipped up there:
    – to some degree the developers failed to look over the provided package
    – the release management team did the best they could, but seemed to have confusing instructions
    – we had no automated tests and did not exercise the manual ones
    – our on-site team (we’re not doing Agile, so we’re not all on-site) did not try out the new software properly before putting it into production
    – etc.

    Anyway, I do feel guilty, since I could have done a better job there by having automated tests in place to catch the problem before it went into production. Back in summer we had a similar one hit us on some other component, where different stages of tests did not find the problem before it was in production – two times within the span of a month – boom. Taking the team more into consideration here is a good thing, and I will get back to this when I’m at work. For the last part I have already started to try out a small lessons-learned exercise based on what I could take from Norman Kerth’s book “Project Retrospectives”. I hope this will help us avoid problems in the future.
