While leading the Testing Dojos at the Belgium Testing Days, I was unfortunately able to attend just a few presentations. One of them was the Lightning Talks session on Tuesday evening, where Dorothy Graham, Stuart Reid, Hans Schaefer, Lisa Crispin, Johanna Rothman, Julian Harty and Lloyd Roden looked into the future.
The future of test automation
Dorothy Graham started the Lightning Talks track with her view on the future of test automation. Dorothy said there has been some innovation in automation. One example is test-driven design: testers have pushed programmers to think about testing for years, and now programmers have discovered it for themselves. Open source and free tools for test automation are yet another example.
Going beyond what is possible in manual testing requires virtualization with multiplied environments. Automation as a service is currently emerging in the cloud, and cloud-based automation makes it possible to test in production-like environments. One thing Dorothy highlighted is automated exploratory testing, which uses automation to go beyond what manual testing can achieve.
Barriers to good automation include the failure to separate layers of abstraction. Domain-specific languages for testing separate testers from the testware, so that any tester can write and run automated tests. A testware architecture separates the implemented tests from the tool-specific scripts. Another barrier to good test automation is management perception, which is often based on unrealistic expectations.
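As a rough sketch of the layering Graham describes (all class and method names here are invented for illustration), a domain layer lets testers express tests in business terms, while a single tool-specific layer is the only code that knows the automation tool:

```python
class BrowserDriver:
    """Tool-specific layer: the only code that knows the automation tool.
    (This sketch merely records actions instead of driving a real browser.)"""
    def __init__(self):
        self.log = []

    def fill(self, field, value):
        self.log.append(("fill", field, value))

    def click(self, button):
        self.log.append(("click", button))


class AccountActions:
    """Domain layer: testers write tests in business terms,
    never touching the tool-specific scripts underneath."""
    def __init__(self, driver):
        self.driver = driver

    def open_account(self, customer):
        self.driver.fill("customer-name", customer)
        self.driver.click("open-account")


# A test written against the domain layer never mentions the tool,
# so swapping automation tools only touches BrowserDriver.
driver = BrowserDriver()
AccountActions(driver).open_account("Alice")
print(driver.log)
```

The point of the separation is visible in the last lines: the "test" reads as business vocabulary, and a tool change stays confined to one class.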
Regarding the future of test automation, Dorothy Graham said that we are probably on the crest of a wave of renewed interest in automation. The main thing in the next five years will be significant growth in test automation, in adapting to it, and in applying it successfully. Graham said that test automation is free (referring to Crosby’s Quality is Free), but only if you are willing to pay for it. She concluded by pointing to her new book on experiences in test automation, coming out this year.
Hans Schaefer said that predicting the future is difficult. He started training people in testing in 1983. Looking back, he said that 90% of what he taught then is still relevant. Schaefer said that during the last five years testers have become accepted; testers are now proud of their roles, which was not the case ten years ago. He expects even more acceptance for testing in the years to come.
Schaefer turned to education. He has been teaching software testing at universities, yet testing is not part of the official curriculum at most of them; the people coming to his courses are always volunteers. Schaefer claimed that programs such as the ISTQB courses will eventually be taught at universities. Another topic for the future is virtualization: with cloud-based testing, performance testing can be exploited on a larger scale. Tools are yet another thing to watch in the years to come.
Schaefer finished by saying that a focus on the people in testing will bring innovation in the next few years.
Courage of the Future
Lisa Crispin’s inspiration for her lightning talk on the future of testing came from the Agile in a Flash cards. The topic she picked is courage. She referred to her donkey shows as a source of courage: the shows demand courage of her, and that courage serves her well.
She referred to the six courages from the Agile in a Flash book. Lisa explained that she called a meeting when one of their business concepts changed from plan sponsor to plan administrator. The team decided to maintain high standards and change the term throughout the whole code base, thereby throwing away poor solutions. The programmers had the courage to make the corrections to the production code; in order to deliver quality work, they dealt with the problem. By calling the meeting, Lisa broke down a silo as the tester on her team.
Lisa finished by stating that she now calls herself an Agile tester, but she hopes that in a few years we will refer to ourselves simply as testers, and call the things we do software development.
Johanna Rothman said that there is a myth in software development that we have to utilize people 100% of the time. Rothman started in software development when machine time was far more expensive than person time. Back then the machines were 100% utilized; in the 1970s, time slots were even assigned to individuals to maximize utilization.
In the 1980s things started to change. Computer prices fell, and salaries started to look big compared to the price of a computer. Managers, conditioned to 100% utilization, switched from 100% machine time to 100% person time. But there is a fallacy attached to this: people need sleep in order to be efficient. The issue Rothman raised is that managers need to change their thinking.
Technical work is different from management work, Rothman said. Managers multi-task: they go to a meeting, read mail while listening to what is said, and so on. Technical work is totally different, because people are not computers. What managers don’t understand is that they can keep their kind of multi-tasking up for hours and hours, while technical people cannot. Management work does not demand that much of the brain, but real, intense work does not function in the same manner: more than six and a half hours of intense technical work is not possible.
Pushing the Boundaries of Test Automation – Including Automating Heuristics
Julian Harty explained that one task he got from his boss was to automate all the usability testing. Heuristics are fallible rules or guidelines, Harty explained. He took a look at the dynamic usability testing tools available. There were tools that could tell him where alt tags were missing on a static page. While this might be useful, for a blind person, hitting the Tab key 400 times in order to reach an element of interest is cumbersome, and no one would do it. He also referred to fighting layout bugs with a CSS hack: make everything turn black, and then find bugs in the layout based on what remains.
Harty related automation bias to the paper The Five Orders of Ignorance, which describes the different levels of ignorance.
Stuart Reid challenged whether there will be testing professionals in the future. When Stuart left university, his naive view of the “profession” of testing was that testers’ capabilities followed a normal distribution. He cited the SEI, according to which the actual distribution of organizational capabilities is skewed, and argued that for testers the distribution is even more skewed: testers are probably less capable than programmers when it comes to software development.
Reid said that the people in the bottom 20% of the curve should not be called professional testers. He supposes himself to be in the top 20%, working with project managers, programmers, and business analysts, and he does not want to be put in the same class as the bottom 20%. So he wants to define what professional means in the first place. He referred to educational background: 75% of the people have degrees. He cited Tom DeMarco saying that ten years ago most testers had never read a book on testing.
Reid pointed out the outcome of unprofessional testing. Using ad hoc testing, testers find few faults. That makes them popular with the developers, who think their code is great. The project managers are happy too, because the project is not held up. Test and delivery costs are low, and so is rework before delivery. As a profession we need to get our house in order, Reid claimed, calling for making testing a true profession.
It’s important for me to point out that I don’t agree with this view.
Test cases… is it quality or quantity?
Lloyd Roden welcomed the audience to the tester’s weakest link and introduced four testers. First there was John, with the company for ten years and a tester for five. Carol joined the company a month ago and is new to testing, coming from a modeling career. Rick has been a senior tester for 25 years. And last there is Pam.
John is asked how many test cases, according to his definition, he has run in the last hour of testing. John says 30. Carol has a different definition of a test case than John and is asked the same question; by her definition of “input values, execution, pre-conditions, post-conditions”, Carol has run two test cases. Rick has run 300. Finally, Pam’s definition differs from all the others, and she claims she has run zero test cases. Pam is therefore the weakest link, subject to being fired.
Roden left the audience with the thought that counting test cases is meaningless if we don’t know the background. Stressing the need to know the context, Roden explained that we first need to know more about the test cases themselves. Regarding the test case counting fallacy: Pam had failed to install the product in the first place, which is why she couldn’t run any test within the last hour, while Rick had merely clicked 300 buttons.
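The counting fallacy is easy to demonstrate in a few lines of Python. All names and numbers below are invented for illustration: the same hour of testing yields wildly different counts depending on which definition of “test case” is applied.

```python
# Hypothetical session log: 300 raw button clicks spread over two scenarios.
actions = [
    {"click": i, "scenario": "login" if i < 150 else "checkout"}
    for i in range(300)
]

def count_by_clicks(log):
    """Rick's style: every button click counts as a 'test case'."""
    return len(log)

def count_by_scenarios(log):
    """Carol's style: one case = input values, execution,
    pre-conditions, post-conditions -- i.e. one coherent scenario."""
    return len({action["scenario"] for action in log})

print(count_by_clicks(actions))     # 300
print(count_by_scenarios(actions))  # 2
```

Both numbers describe the same hour of work, which is exactly why a raw count, without its definition and context, carries no meaning.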
Roden explained that the analogy of a case is a good one, but in the real world there are cases of many sizes: a baggage case with a smaller case inside it, down to the case of his USB stick. Likewise, we need to know what a test case is before trying to derive meaning from the things we count.
Roden explained how we measure the quality of test cases: by the confidence they give in what has been tested, by the depth of testing, and by how much has been tested; and by finding and removing defects effectively and efficiently, reducing the risk of shipping the product. Testers should provide this information about their test cases. He finished with a warning to be careful about the quantity of test cases without referring to their quality.