Huib Schoots approached me late last year about contributing to the TestNet Jubilee book he was putting together. The title of the book is “The Future of Software Testing”. I submitted something far too long, so most of it was cut during copy-editing. However, I wanted to share my original thoughts as well. So, here they are – disclaimer: they might be outdated by now.
The Testing Quadrants continue to be a source of confusion. I heard that Brian Marick was the first to write them down after long conversations with Cem Kaner. Lisa Crispin and Janet Gregory refined them in their book on Agile Testing.
I wrote a while back about my experiences with the testing quadrants in training situations. One pattern that keeps recurring when I run this particular exercise is that teams new to agile software development – especially the testers – don’t know which testing techniques to put in the quadrant with the business-facing tests that support the team. All they seem to see is critique of the product. This kept confusing me, since I think we testers can bring great value there. Recently I had an insight, triggered by Elisabeth Hendrickson’s keynote at CAST 2012.
A while back I ranted about best practices. Among the things I found in that particular blog entry is that there are quite a few definitions of the term “best practice” out there. Nowadays, if it’s not on Google, it doesn’t exist – and for “best practice”, it turns out, Google is quite capable of delivering a definition. Although I resonate with the principles of context-driven testing, I recently found the second principle unhelpful. The second principle states:
There are good practices in context, but there are no best practices.
Like many other people I respect, I used to start ranting about best practices whenever people asked for them – particularly in training situations, even though that does not help much. J.B. Rainsberger‘s introduction to the Satir Communication Model helped me understand why that is.
This is an experience report that falls into several categories at once. I think the most remarkable one is the personal-fail category (hooray, I learned something!).
As a consultant I do a fair amount of traveling. Most of the time I stay on the ground, though on my most recent trip to San Jose, CA for Test Coach Camp and CAST that was not an option. So, while lying jetlagged in the hotel room, I decided to blog about my trip here: why I ended up testing a passenger airline system, which bugs I found, and which follow-up tests I could imagine running from here.
At times I find quite interesting things in topics that don’t seem to particularly interest me. A recent example – again – comes from Let’s Test in May 2012. While at the conference, I read through the program and thought that I didn’t need to learn anything new about recent trends in bug reporting. Since I prefer to work on Agile projects, I don’t expect to use a bug tracker much in the future.
On the other hand, I knew that I had signed up for BBST Bug Advocacy in June. So I kept wondering what I would learn there, and whether it would be as work-intensive as Foundations was. I was amazed at some things. This blog entry deals with my biggest lesson: building blocks for follow-up testing – something I think a good tester needs to learn regardless of their particular background.
Over the course of the Let’s Test conference in Runö, Sweden, I noticed a problem with context-driven testing. Over the past month or two this turned into two problems I see with context-driven testing. I finally decided to put them out there for further discussion. I hope a lot of you don’t agree with me – and I hope a lot of you speak up.
This is a guest blog entry from Andrii Dzynia. He contacted me a while ago about Testing Dojos, wanting to run a session in Kyiv, Ukraine on his own. At their first public Testing Dojo they seem to have had a great time. You can find his original blog entry in Russian on his blog.