I got the following from a post on design principles related to object-oriented code. You can find the whole enchilada here.
Today I was surprised that – while the principle of single responsibility is rather new in the software world – this principle has been known for a long, long time in the testing world. Why?
On the project I’m currently working on, we got our requirements as a database dump. Our developers decided to generate the configuration – our system under test – using some scripts that extract the data directly from the database and transform it into the configuration for the software we will deliver.
The test team responsible for the test automation of this configuration was then asked to build some automated tests for use in FitNesse. What they did was pretty much the same as our developers: they generated the test cases from the database. This may sound like a reasonable approach.
So, where’s the problem? The problem is that the resulting tests are not aware of the context. What is the context? We need to deliver rating software to a customer in Brazil, where the tax system is divided among the 27 states. We have four major bunches of baselines, each with about 8 to 10 variations in the baseline price (20, 40, …). The resulting configuration consists of about 4 times 10 times 27 (= 1080) different single variations, and each single variation needs about 20 to 100 single test cases. The generated test suite therefore consists of a huge number of tests (> 100000), which run on an overall system level against our delivered system with a test execution time of about 45 seconds. The result is an exhaustive regression test suite which takes 24 hours to execute. Due to the payload generated, FitNesse does not handle these tests in a stable way and crashes once in a while under the sheer amount of test cases in the overall structure.
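A quick back-of-the-envelope calculation makes the explosion visible. This is just a sketch reproducing the counts from the text above (four baselines, roughly ten price variations, 27 states, 20 to 100 test cases per variation):

```python
# Back-of-the-envelope check of the test explosion described above.
baselines = 4          # four major bunches of baselines
price_variations = 10  # about 8 to 10 variations in the baseline price
states = 27            # the 27 Brazilian states

variations = baselines * price_variations * states
print(variations)  # 1080 single variations

# With 20 to 100 test cases per variation:
min_tests = variations * 20
max_tests = variations * 100
print(min_tests, max_tests)  # 21600 to 108000 -> easily > 100000 tests
```

The upper end of that range is where the generated suite landed, and it explains both the 24-hour runtime and the strain on FitNesse.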
So, restating the situation from a different perspective: the configuration is generated, and adapting it to changing requirements takes the developers just a few hours. Executing the test cases, however, is next to impossible. On top of that, the generated tests carry high maintenance costs, requiring a large amount of rework to keep up with the development team. Delivering quick feedback to the developers is unlikely to happen. So, what is the reason to have this huge amount of never-executed test cases in the first place?
This is where I get to the topic of this post: just because you can, doesn’t mean you should. Just because it is possible to generate an exhaustive test suite for test automation does not mean you should do it. When the resulting test execution times are unacceptable and the sheer number of tests even causes the test system to crash once in a while, you should refuse to do so.
So what should you do? Clarify your mission for the testing activities. What is the goal of your testing? Do you want to find risky problems quickly? Then find another approach. Do you want to prove that everything is fine, ignoring the approach of the development team? Then you should build an exhaustive test suite – probably. (I would suggest looking for a better approach that gets you feedback in 90 minutes at most.) Know your context. How is the configuration generated? If errors reported by your test suite will be the same for all the different states, then you should rethink your approach. If there is complex logic attached to generating the configuration, then you should probably consider testing that logic and just send some tracer bullets from end to end. What you can always do is reconsider your current approach and check for opportunities to improve. The time you may win this way can be spent on follow-up testing and catching up on other activities. Maybe you can spend some more time pairing with your developers on the unit tests?
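The “know your context” advice can be sketched in code: if the configuration generator treats all states alike, a test for every state repeats the same check 27 times. A risk-based sample – one representative state per baseline/price pair plus a handful of random spot checks – covers the same logic at a fraction of the size. All names and counts below are hypothetical, just to illustrate the shape of the idea:

```python
import itertools
import random

# Hypothetical stand-ins for the real configuration dimensions.
BASELINES = ["A", "B", "C", "D"]                    # four major bunches
PRICES = [20, 40, 60, 80, 100, 120, 140, 160]       # assumed baseline prices
STATES = [f"state_{i:02d}" for i in range(1, 28)]   # the 27 states

# The exhaustive approach: the full cross product.
exhaustive = list(itertools.product(BASELINES, PRICES, STATES))

# The sampled approach: if generation logic does not vary per state,
# one representative state per baseline/price pair exercises the same
# code paths; a few random spot checks guard against that assumption.
random.seed(42)
representative = [(b, p, STATES[0])
                  for b, p in itertools.product(BASELINES, PRICES)]
spot_checks = random.sample(exhaustive, 10)
sample = representative + spot_checks

print(len(exhaustive), len(sample))  # 864 combinations shrink to 42
```

The point is not these exact numbers but the ratio: the sampled suite could run in minutes, leaving the exhaustive end-to-end sweep as an occasional tracer-bullet exercise rather than the default regression run.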