Testing inside one Sprint’s time

Recently I was reminded of a blog entry from Kent Beck way back in 2008. He called the method he discovered while pairing the Saff Squeeze, after his pair partner David Saff. The general idea is this: write a failing test at the highest level you can, then inline all the code it exercises into the test, and remove everything you don’t need to set up the test. Repeat this cycle until you have a minimal procedure that reproduces the error. I realized that this approach may be applied more generally to enable faster feedback within a Sprint’s worth of time. I sensed a pattern there, so I decided to get my thoughts down while they were still fresh, in a pattern format.

Testing inside one Sprint’s time

As a development team makes progress during the Sprint, the developed code needs to be tested to give the whole team the confidence to move forward. Testing helps to identify hidden risks in the product increment. If the team does not address these risks, the product might not be ready to ship for production use, or customers might shy away from it because too many problems make it hard to use.

With every new Sprint, the development team implements more and more features. With every feature, the test demand (the number of tests that should be executed to avoid new problems with the product) rises quickly.

As more and more features pile up in the product increment, executing all the tests takes longer and longer, up to the point where not all of them can be executed within the time available.

One common way to deal with the ever-increasing test demand is to create a separate test team that executes all the tests in its own Sprint. This test team works apart from new feature development, on the previous Sprint’s product increment, to make it potentially shippable. This might help to cope with the testing demand in the short run. In the long run, however, that same test demand keeps piling up, to the point where the separate test team can no longer execute all the tests within its own Sprint. Usually, at that point, the test team asks for longer Sprints, widening the gap between the time new features are developed and the time their risks are addressed.

A separate test team also creates a hand-off between the team that implements the features and the team that addresses the risks. It lengthens the feedback loop between introducing a bug and finding it, causing context-switching overhead for the people who fix the bugs.

In regulated environments, there are many standards the product must adhere to. The additional tests these standards require often take a long time to execute. Executing them on every Sprint’s product increment is therefore not a viable option. Still, to make the product increment potentially shippable, the development team needs to fulfill these standards.

Therefore:
Execute tests on the smallest level possible.

Especially when following object-oriented architecture and design, the product decomposes into smaller pieces that can be tested on their own. Smaller components usually lead to faster test execution times since fewer sub-modules are involved. In a large software system consisting of an application server with a graphical user interface and a database, the business logic of the application may be tested without involving the database at all. In hardware development, the side-impact system of a car may be tested with physical simulations, without driving the car into an obstacle.
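As a minimal sketch of this idea in software (assuming JUnit 5 on the classpath; all class and method names here are invented for illustration), the business logic depends on a small interface, so a test can substitute an in-memory fake for the real database:

```java
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical seam: the business logic depends on this small
// interface rather than on a concrete database class.
interface OrderRepository {
    List<Double> amountsFor(String customerId);
}

// The business logic under test never touches a real database.
class InvoiceCalculator {
    private final OrderRepository repository;

    InvoiceCalculator(OrderRepository repository) {
        this.repository = repository;
    }

    double totalFor(String customerId) {
        return repository.amountsFor(customerId).stream()
                .mapToDouble(Double::doubleValue)
                .sum();
    }
}

class InvoiceCalculatorTest {

    @Test
    void sumsAllOrderAmountsForACustomer() {
        // An in-memory fake stands in for the database, so the test
        // runs in milliseconds and needs no server or schema setup.
        OrderRepository fake = customerId -> List.of(10.0, 32.5);
        assertEquals(42.5, new InvoiceCalculator(fake).totalFor("customer-42"), 1e-9);
    }
}
```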

One way to develop tests and move them to lower levels of the design and architecture starts with a test at the highest level possible. After verifying that this test fails for the right reasons, move it further down the design and architecture. In software, this may be achieved by inlining the production code into the test and then throwing out the unnecessary pieces. Programmers can repeat this process until they reach the smallest level possible. For hardware products, similarly focused tests may be achieved by breaking the hardware apart into sub-modules with defined interfaces, and executing tests at the module level rather than at the whole-product level.
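In software, one such squeeze step might look like the following minimal sketch (again assuming JUnit 5; the classes and the location of the supposed defect are invented for illustration): a test at the highest available level is inlined and pruned until a smaller test, one level further down, pins the same failure.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical production classes, shown only so the squeeze steps compile.
class DiscountPolicy {
    double rateFor(String customerId) {
        // Suppose the defect hides here: loyal customers should get 10%.
        return customerId.startsWith("loyal") ? 0.10 : 0.0;
    }
}

class Checkout {
    private final DiscountPolicy policy = new DiscountPolicy();

    double totalFor(String customerId, double amount) {
        return amount * (1.0 - policy.rateFor(customerId));
    }
}

class SaffSqueezeTest {

    // Step 1: a failing test at the highest level available.
    @Test
    void checkoutAppliesLoyaltyDiscount() {
        assertEquals(90.0, new Checkout().totalFor("loyal-customer", 100.0), 1e-9);
    }

    // Step 2: inline the body of Checkout.totalFor into the test, then
    // delete every line that cannot influence the failure. The survivor
    // is a smaller test, one level further down the design.
    @Test
    void loyaltyDiscountRateIsTenPercent() {
        assertEquals(0.10, new DiscountPolicy().rateFor("loyal-customer"), 1e-9);
    }

    // Step 3: confirm the squeezed test fails for the same reason as the
    // original, then repeat until the defect is pinned at the lowest level.
}
```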

By applying this approach, regulatory requirements can be broken down to individual pieces of the whole product and can therefore be verified faster. Taking the requirements from the standards, expressing them as tests, and being able to execute those tests at least on a Sprint cadence helps the development team receive quick feedback about their current progress.
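As a minimal sketch of this idea (assuming JUnit 5; the requirement, class, and method names below are invented for illustration and not taken from any particular standard), a single clause from a standard can be restated as a fast test against one small module:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical sketch: one clause from a standard, restated as a test
// against a single small module instead of the assembled product.
class AuditTrailComplianceTest {

    // Invented stand-in for a module that formats audit records.
    static String auditRecord(String user, String action) {
        return java.time.Instant.now() + " " + user + " " + action;
    }

    @Test
    void everyAuditRecordCarriesATimestamp() {
        String record = auditRecord("alice", "login");
        // The (invented) requirement: records must begin with an
        // ISO-8601 instant so that actions can be traced in time.
        assertTrue(record.matches("^\\d{4}-\\d{2}-\\d{2}T.*"),
                "audit records must begin with an ISO-8601 timestamp");
    }
}
```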

In addition, these tests give the team the confidence to change individual sub-modules while making sure their behavior stays the same.

This solution still introduces an additional risk. By executing each test on the smallest level possible and making sure that each individual module works correctly, the development team sub-optimizes the testing approach. Even though each individual module works correctly according to its interface definition, the pieces may not interact correctly with each other, or may be built against diverging interface definitions. This risk should be addressed with additional tests focused on the interfaces between the individual modules, to avoid sub-optimization and non-working products. Far fewer of these integration tests will be necessary, though, so the resulting test suite will still fit into a Sprint’s length of time.
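One way to cover that interface risk, sketched here under the same assumptions (JUnit 5, invented names), is a contract test: every implementation of a shared interface, including the in-memory fake used by fast module-level tests, has to pass identical expectations, so the pieces cannot drift apart unnoticed.

```java
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Same hypothetical interface as in the earlier sketch.
interface OrderRepository {
    List<Double> amountsFor(String customerId);
}

// The contract: assertions that every implementation must satisfy.
abstract class OrderRepositoryContract {

    abstract OrderRepository repositoryContaining(String customerId, List<Double> amounts);

    @Test
    void returnsExactlyTheStoredAmounts() {
        OrderRepository repository =
                repositoryContaining("customer-42", List.of(10.0, 32.5));
        assertEquals(List.of(10.0, 32.5), repository.amountsFor("customer-42"));
    }
}

// The fake's run of the contract; a second subclass would wire up the
// real database-backed implementation and run the very same assertions.
class InMemoryOrderRepositoryContractTest extends OrderRepositoryContract {
    @Override
    OrderRepository repositoryContaining(String customerId, List<Double> amounts) {
        return id -> id.equals(customerId) ? amounts : List.of();
    }
}
```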
