Continuing the series of questions from the CONQUEST 2010 conference in September, this is the last piece we’ll take a closer look at. Today’s topic is test automation.
During sprints there is most often not enough time for test automation. Additionally, during the sprint everything is variable, so functional test cases (not unit tests) that are automated immediately have to change often. When, therefore, should functional test cases be automated on Agile projects?
This question seems to reject the premise of the problem. It is not the short time available for functional test automation that is causing the trouble, but rather the little time the team takes to automate tests on the functional level. In particular, I would take a closer look at the functional tests the team creates, their understanding of the domain at hand, and how easily they can reach domain experts to clarify any upcoming questions.
Personally, I would work with the team to bring this up at the next project retrospective, and seek opportunities to come up with clever ways to tackle the problem. One option could be to hold Specification Workshops with the business experts. Another might be to get the team trained in functional test automation. Yet another could be to make the team more familiar with the particular tool or approach they are using. In order to achieve test automation success, the team needs skills in testing as well as in test automation, which is software development. The team needs to extract commonly used variables and keywords, and apply principles like Don’t Repeat Yourself, the Single Responsibility Principle, and the Open/Closed Principle. Only with the right skills at all these levels will the team succeed with test automation.
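To illustrate the idea of extracting reusable keywords and applying Don’t Repeat Yourself to functional test code, here is a minimal sketch. The workflow, helper names, and session structure are all hypothetical and stand in for whatever tool the team actually uses:

```python
# Hypothetical sketch: applying DRY to functional test automation.
# Instead of repeating the same ordering steps in every test case,
# the team extracts a reusable "keyword" (a helper function) so a
# workflow change only has to be made in one place.

def place_order(session, item, quantity):
    """Reusable keyword: encapsulates the hypothetical order workflow."""
    session["cart"] = {"item": item, "quantity": quantity}
    session["status"] = "ordered"
    return session

def test_order_single_item():
    session = {}
    place_order(session, "book", 1)
    assert session["status"] == "ordered"

def test_order_multiple_items():
    session = {}
    place_order(session, "pen", 3)
    assert session["cart"]["quantity"] == 3

test_order_single_item()
test_order_multiple_items()
print("all checks passed")
```

When the order workflow changes mid-sprint, only `place_order` needs updating, which is exactly why frequently changing requirements argue for, not against, well-factored test automation.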
I have conducted many test audits, and have almost never seen a project that could make a well-grounded statement about its achieved functional test coverage. Many test management tools assume that one test per requirement is enough (green light). Beyond that, hardly anything about this can be found in large projects. “Functional test coverage” seems to be a mystery. Is there literature that provides clear guidance? What do you suggest in order to measure functional test coverage?
Instead of measuring functional test coverage, I would ask myself what I am about to conclude from the numbers. What am I trying to measure in the first place? Is it a statement about the project’s readiness to ship the product, or is it to have a warm and cozy feeling that I have some numbers to justify my decision? For the first purpose, I would instead try to measure how useful my software product is by exposing it early and often to real users and customers, and getting their feedback about the releasability of the software at hand. For the second purpose, I could just as well roll some dice instead of trying to measure something that is of no use in the first place.
Is it possible to automate the test process in the direction of a test facility? Are there approaches or concrete examples available?
Testing is a sapient activity. Sapient means in this regard that it requires a human brain in order to perform well. At least this is the current state of the theory behind testing, and Michael Bolton captured it in the Testing vs. Checking discussion about a year ago. Since we have so far not been able to automate a human brain and a human thought process with a computer, it is currently impossible to automate good and skilled testing. That does not mean that you cannot automate unskilled and scripted testing, but most of the time that is nothing I am particularly interested in. Therefore I don’t care whether we are approaching test facilities with unskilled testing, as long as I can show skilled and sapient testing to counteract that.