At the EuroSTAR conference Carsten Feilberg spoke about session-based test management in practice.
Carsten Feilberg introduced the papers from the Bach brothers on session-based test management. He explained that he had tried to introduce session-based test management, but the managers at the company he was hired by didn’t know anything about it. Feilberg showed that by re-arranging the terms, selling session-based test management as Managing Testing Based On Sessions works better for managers, since it puts the word “manage” first. Feilberg explained that session-based test management is about coverage.
Feilberg continued by showing the elements of session-based test management: overall missions, charters and sessions, and a debrief at the end.
Feilberg explained that the overall mission is like an elephant. He divided a picture of an elephant into chunks, and explained that these chunks are the charters for our testing. Once testing starts, the picture of our software takes shape. Based on the feedback from charters already tested, we may have to change some things in the charters yet to come. He explained that charters might be added or changed according to the overall mission.
Feilberg explained charters. He showed an example, the most important elements being a mission and the assigned time. These assigned times of uninterrupted testing are a guide. He explained that testing for more than three hours straight was impossible. Feilberg proposed timeframes for uninterrupted testing of 60, 90, or at most 120 minutes.
Challenged by Rob Sabourin, Feilberg explained that missions rarely fit into 90 minutes of uninterrupted testing. He said that in the beginning the mission is likely to be too broad. Over time the missions get narrower. For missions Feilberg proposed the format “Test X because Y”. This focuses the testing activities on the underlying business goal.
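To make the charter format concrete, here is a minimal sketch of a charter as a data structure. The field names and the example values are my own illustration, not something Feilberg showed; only the “Test X because Y” format and the 60/90/120-minute timeboxes come from the talk.

```python
from dataclasses import dataclass

@dataclass
class Charter:
    """A test charter: a mission plus an assigned, uninterrupted timebox."""
    target: str   # the X in "Test X because Y"
    reason: str   # the Y: the underlying business goal
    minutes: int  # assigned uninterrupted testing time (60, 90, or at most 120)

    def mission(self) -> str:
        return f"Test {self.target} because {self.reason}"

# Hypothetical charter for illustration.
charter = Charter("the import wizard", "customers migrate data weekly", 90)
print(charter.mission())
# Test the import wizard because customers migrate data weekly
```

Keeping the reason next to the target makes it harder to write a charter that has no business goal behind it.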
Feilberg showed different types of sessions. Sessions can consist of discovery or touring the application. Sessions may consist of investigating bugs, narrowing them down in order to provide enough information to fix them. Another type of test session is retesting earlier bug reports. There may also be test sessions specific to the particular test object.
During testing, Feilberg explained, testers create a session log file. It should answer the questions of who, what, and when, record what the tester did and saw, link to screen recordings like screenshots or videos, and note issues in the project as well as the process, and of course bugs in the product. Any identifiers for data files also belong in the session log file. Feilberg mentioned that tools like Session Tester and Rapid Reporter help to keep these session notes.
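The fields Feilberg listed can be sketched as a plain-text session log template. This is an assumption about layout, not the format of Session Tester or Rapid Reporter; the field names simply follow the questions above, and all values in the example are hypothetical.

```python
# Minimal session log template; field names mirror Feilberg's list.
SESSION_LOG_TEMPLATE = """\
CHARTER: {charter}
TESTER (who): {tester}
START (when): {start}
NOTES (what I did and saw):
{notes}
RECORDINGS (screenshots / videos): {recordings}
ISSUES (project / process): {issues}
BUGS (product): {bugs}
DATA FILES: {data_files}
"""

# Hypothetical example session.
log = SESSION_LOG_TEMPLATE.format(
    charter="Test the import wizard because customers migrate data weekly",
    tester="Jane Doe",
    start="09:00, day 1",
    notes="- Imported a 10k-row CSV; progress bar stalled at 80%",
    recordings="session-042.mp4",
    issues="Staging database was down for 15 minutes",
    bugs="BUG-1234: stalled progress bar",
    data_files="import-sample-10k.csv",
)
print(log)
```

Recording the data file identifiers in the log is what makes a later retesting session reproducible.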
Feilberg explained in the end that during the debrief any problems and ideas are discussed between the manager and the tester, according to the articles from a decade ago. Feilberg likes to put every tester in one room for the debrief. Every tester gets five minutes to share what happened during the last day. This helps to share knowledge and generate new ideas about the big picture. This debrief format reminded me of daily stand-up meetings as they exist within Agile methodologies to align everyone to the common target. Feilberg continued to explain that he brings blank charters to the debrief, since charters might change during the debrief, just as with the picture of the elephant from earlier.
On metrics, Feilberg explained that the debrief is about the testing time, bug reporting time, and setup time. As a test manager he is interested in how much time can be spent actually testing, and how much time is spent on setting up the product. Finally, bug reporting time distracts the tester from actually testing. These three figures provide meaningful information. If the setup time is too long, then a new data input mechanism might help to improve testing overall. Maybe some tools for automating parts of the process can help, too.
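The three figures can be reported as shares of the session, which makes sessions of different lengths comparable. A minimal sketch, assuming the times are captured in minutes per session (the function name and example numbers are mine, not from the talk):

```python
def session_breakdown(test_min, bug_min, setup_min):
    """Return the share of a session spent testing, reporting bugs,
    and setting up, as percentages of the total on-charter time."""
    total = test_min + bug_min + setup_min
    return {
        "testing": round(100 * test_min / total),
        "bug reporting": round(100 * bug_min / total),
        "setup": round(100 * setup_min / total),
    }

# Hypothetical 90-minute session with a heavy setup cost.
print(session_breakdown(test_min=45, bug_min=15, setup_min=30))
# {'testing': 50, 'bug reporting': 17, 'setup': 33}
```

A session where setup dominates is exactly the signal Feilberg described: it points at investing in a better data input mechanism or some automation rather than at the tester.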
Feilberg discussed what happens in between sessions. Within session-based test management there will be time to discuss bugs, to clear up bugs, and to attend meetings. This time between sessions should be spent on the project, but not on interacting too much with the product.
On the dangers of session-based test management, Feilberg explained that it can still go wrong. There might be resistance to session-based test management. Especially setting aside time to define proper charters may be one problem in this category. Another dysfunction is breaking the time box of the uninterrupted time set aside for testing. Last, skipping the debriefing is a problem, as you lose the information from the test session, and therefore from the overall product, process, and project.