The Deliberate Tester – Chapter 3: Fallacies and Pitfalls

Back in 2011, I approached Rob Lambert at the Software Testing Club with a small series written in a narrative format, since I wanted to try that out. Rob decided to run the series on the Software Testing Club back then, and I had some fun writing it. Skip forward 11 years: the Software Testing Club no longer exists, and it’s been a while since I was last in touch with Rob, yet I figured, let’s see how this series has aged over the years. As a sort of throwback Friday for myself, I will publish the entries on a weekly basis and read along with you. I think I ended up with eight chapters in the end, and I might add an overall reflection at the end. In case you want to catch up with the previous parts, I published these ones earlier:

Chapter 3: Fallacies and Pitfalls

Peter hardly slept before turning in for work the next day. He couldn’t wait to see the results of their tests on the nightly build. He went directly to John’s office.
“Do you have the results from the nightly build? Did everything run well?”
“Good morning, Junior. How are you?”
“Fine, now, what about…”
“Patience. Patience. I just arrived. Right after getting myself a coffee, we can take a closer look into the results.”
“Great.”
The test results from the nightly build all looked pleasing. John showed Peter the progress of the last few builds in a graphical overview. The number of test cases was slowly crawling upwards. Everything looked promising.

“Ah, here he is. Peter, may I show you something? If you don’t mind, John?”
“Ah, go on, Jennifer. Show the kid the application. We made some pretty good progress here yesterday.”
“Yeah, I saw the results from the nightly build and the automation. Come on, Peter, let’s explore just that build.”
Peter was surprised. He wanted to stay with the automation since it relieved him of many of the tedious steps in manual testing.
“Go ahead, Junior. You may come back after that session.”

“Alright, let’s set the charter for the next ninety minutes.”
“I would like to explore some of the areas we automated yesterday.”
“Alright, so, what’s our mission?”
Jennifer and Peter brainstormed about the features John and Peter had automated just the day before. Peter tried to bring in many areas he would like to inspect, but Jennifer told him that ninety minutes of testing is rather short. So, they set up a minimal charter that both could agree upon. Then they started to explore the application. Jennifer had some tools prepared to quickly bring in the test data they would need.

“What’s this?” Peter asked.
“Looks suspicious to me, too. I think we got a bug here.”
“Awww, but the automated tests passed this build, didn’t they?”
“Yeah, you saw the results yourself, didn’t you?”
“But, … does this mean that our automation was useless?”
“No, but automation alone is seldom enough. It helps us get rid of most of the tedious test cycles, but we still need to inspect and explore the problem with a sapient mind in front of it to be certain about it.”
“So, let me note this down so that John and I can write an automated test for it later.”
“Hold on. Let’s first dig deeper into the problem. What happens if I do this?”
Jennifer clicked a button and entered a different text into the user interface on the screen. The application crashed after printing a large portion of text on the console from which it got started.
“Now, that is interesting.”
“Why? I don’t understand.”
“Looks like there is a problem with our user interface. Seems like the layout is screwed up. Does not look like something that should be automated.”
“How come?”
“You see, we have a screen here with many elements. These elements have internal identifiers. Two of these elements are now overlaid on each other. In order to write an automated test for it, you would need to identify the position of these elements in x- and y-coordinates and compare the two elements against each other. We don’t have a way to automate that with reasonable effort. Let’s write it down on our issue list. That GUI needs to be re-aligned properly for the next release. We’ll address it during debriefing.”
“Ok, that means I should throw away my note for the automation?”
“Well, we may discuss this during debriefing together with Eric and maybe also John. That’s why we make a note of it and go on. We have to weigh the costs of automation against its benefits here.”
“Ah, I understand your point.”
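Jennifer’s point about comparing elements by their x- and y-coordinates can be illustrated with a short sketch. This is not the team’s actual tooling; the `Rect` type and the sample values are hypothetical, merely to show that an overlap check reasons about geometry rather than the usual identifier-based assertions:

```python
# Minimal sketch of an overlap check between two UI elements, each
# reduced to a bounding box. Two boxes overlap when neither lies
# entirely to one side of (or above/below) the other.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int       # left edge
    y: int       # top edge
    width: int
    height: int

def overlaps(a: Rect, b: Rect) -> bool:
    """Return True if the two rectangles share any area."""
    return not (
        a.x + a.width <= b.x or   # a entirely left of b
        b.x + b.width <= a.x or   # b entirely left of a
        a.y + a.height <= b.y or  # a entirely above b
        b.y + b.height <= a.y     # b entirely above a
    )

# Hypothetical element positions, not taken from the application.
label = Rect(x=10, y=10, width=100, height=20)
field = Rect(x=50, y=15, width=120, height=20)   # overlays the label
button = Rect(x=10, y=60, width=80, height=25)   # properly separated

print(overlaps(label, field))   # True: the misaligned pair
print(overlaps(label, button))  # False
```

Even this toy version shows the cost Jennifer hints at: you would need to extract reliable coordinates for every element and maintain the expected layout, which quickly outweighs the benefit for a one-off layout bug.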

Eric was already expecting Jennifer and Peter.
“So, what did you find? We were planning to ship tomorrow. Can we still make it?”
“Well, we got one crash from a misaligned user interface.”
“Oh, let’s get in touch with the user interface developers directly. Does not sound too complicated, though a crash is dramatic.”
“Yeah. We’re directly going over to them after the debrief.”
“Peter, what did you learn from this session?”
“Well, two things. First of all, test automation may be fast, but it is rarely enough on its own. We still need to explore the product for problems like this misaligned user interface, usability issues, etc.”
“Great. So what’s the second lesson?”
“Second, not every test can be automated.”
“Not every?”
“Well, maybe we can automate nearly every test, but not with a reasonable amount of effort. Of course, we might get some of those automated tests running in 10 years, maybe in 20 years, but who would want to wait that long for a product?”
“Great, Peter. You’re absolutely right. I see you making great progress already.”
“Well, thank you. I think I got great teachers.”
