My Renaming attempt

After a discussion on the Agile-Testing mailing list, I decided to give up my proposal to rename the Agile School of Testing. Erik Petersen put it so well, and I agree with him so fully, that I decided to quote him here:

The schools as defined by Bret in soundbites are:

Analytic School
sees testing as rigorous and technical with many proponents in academia

Standard School
sees testing as a way to measure progress with emphasis on cost
and repeatable standards

Quality School
emphasizes process, policing developers and acting as the gatekeeper

Context-Driven School
emphasizes people, seeking bugs that stakeholders care about

Agile School
uses testing to prove that development is complete; emphasizes automated
testing

I see no evidence in those descriptions that the Agile school has a monopoly on examples. All of these schools choose examples to demonstrate that a system appears to deliver their interpretation of functionality at a point in time, with differing degrees of attention to context and risk. I believe the Schools idea was originally intended to describe groups who tended to favor their ideas (dogma?) over others and focused mainly on functional testing, and when the original Agile School was named, it claimed to be replacing all the other schools. This has since changed considerably, and with new techniques such as Mocking and Dependency Injection and a focus on refactoring (CRAP metric anyone?) I would argue that Agile is much more about design and development aimed at simplicity (YAGNI), of which automated testing is only a part, rather than a specific School of (functional) testing. As I have said before, schools tend to manifest themselves in organizational culture and IMHO are relevant for discussion purposes only. Testing can involve many ideas, some of which are typically associated with schools, and depending on context and risks, testing can draw from all of them. My 2 cents.
Part of the problem is what the earliest schools have become, not what they started as. The original articles on waterfall in the early 1970s stated that just going from dev to test in one step never worked and needed to iterate between the two, but that got lost. In the mid 70s, Glenford Myers in amongst all his “axioms of testing” said that tests need to be written before being executed (because computers were million dollar machines and time was money, no longer such an issue) but he also said stop and replan after some execution to focus on bug clusters, and that also got lost. We need to be open to new ideas and weigh them against our current ones, based on their value and not the perceived baggage they bring from a particular school. Enough of the examples! [grin]
So in a sentence, I agree with Lisa’s posts and Markus’s later post about combinations of techniques (quote “Thinking in black and white when thinking of the schools of testing leads to misconception”), but Markus please ditch the Agile school rename attempt!
cheers,
Erik

Besides the remaining very good comments on this topic on the list, Agile testers simply do not care about the idea of schools of testing. Here is an excerpt from the discussion I had with James Bach on the topic:

When we speak of schools in my community, we are speaking of paradigms
in the Kuhnian sense of worldviews that cannot be reconciled with each
other.

Basically, context-driven thinking helps Agile testers as well, but they do not adopt a Kuhnian sense of worldviews towards testing. Mainly I have been considering whether there is such a thing as an Agile School or not. Bret Pettichord felt there was, but currently I am not convinced. I am glad that I learned a big lesson from great contributors regarding my renaming approach, and I am finally ditching the attempt.

TDDing, Pairing and overcoming common prejudices

I just read a wise posting from Mike Hill on Pairing and TDDing. Here are two points I want to cite; I hope they raise your appetite for more. If so, read the full article – it's worth your time.

You can not go faster by shorting internal quality unless you are a noob.

Taking a step back and improving the internal quality is the best decision you can make – and you can make this decision tomorrow, today, now.

The hard part of programming is the thinking…, and pairing and TDDing both improve it.

This is true. In February we picked up a project that had been struggling for about half a year. With one and a half testers we were able to automate the open-ended tests in just two weeks. At a previous customer a year earlier we had struggled with the same task for nearly a year. The difference came from the kick-start in thinking before starting the test automation – which, in my context, I consider software development.

Apprenticeship

Enrique Comba Riepenhausen shared a video from Dave Hoover on his apprenticeship years in the software industry and on becoming a journeyman. While watching the video I came to understand that reading books and blogs and participating in online mailing lists is the same track Dave Hoover started off on. This made me confident that I am on the right track.

My next step as an apprentice will be the Agile Testing Days conference in Berlin in October. I submitted a proposal about my past year's experiences with Agile practices and principles in a more traditional environment. My proposal got accepted, and I'm curious about the session. Though it is scheduled near the end of the conference, it will be worthwhile to join. Additionally, I hope to meet the people I so far only know from e-mail and chat.

Example-driven School of Testing – Updates II

Yesterday I decided to bring more light into the darkness of my proposal to rename the Agile School of testing, so I raised my points on the Agile-Testing mailing list. There was some valuable feedback, and I identified the next points where clarification is necessary. As a side effect I started doubting that the term "school" does the discussion any good.

First things first. One reply made me aware of a question left open on Bret Pettichord's slides: Do I have to pick a school? A similar discussion arose while we were compiling the Software Craftsman's Ethic. At one point we agreed that the ethics we had written so far should be viewed as one school of Software Craftsmanship. While reflecting on the term "school", we even tried to replace it with "guild" and the like. Answering Bret's question whether or not to pick a school: I refuse to do so. The Craftsmanship Manifesto teaches me one thing: the more practices you know, learn and maintain, the better prepared you are for the job at hand. This is the essence of the second statement of the manifesto and the "learn" portion of the ethics:

We consider it our responsibility 
  to hone our craft in pursuit of mastery;
    therefore, we 
      continuously explore new technologies and
      read and study the work of other craftsmen.

This means that I do not have to pick one school. By knowing how to combine valuable lessons from each of the schools, I have a more powerful set of tools at hand. While replying to James Bach on this I realised that the combination of the Context-driven School and the Example-driven School is quite valuable. Gerald M. Weinberg wrote about this several decades ago:

…putting together two ideas to form a new one that's better than either of its parents…

Copulation of ideas is a key to innovation; what gets in the way of creative new ideas, on the other hand, is the No-Problem Syndrome. Read on in Becoming a Technical Leader.

My closing note: thinking in schools tends to become thinking in boxes. Use the schools approach as a tool for reducing complexity to a model your brain can handle, but refuse to stop thinking while doing so. You shouldn't forget that humans are more complex than your mind might be able to handle, though.

Example-driven School of Testing – Updates

After a discussion about the Example-driven School of Testing with Michael Bolton, I realised that I had missed some points. Being human, I truly believe that I am allowed to miss a point once in a while; the critical part is to realise it yourself.

The first point Michael mentioned was the idea that whenever the acceptance tests pass, we – the project team – are done. In fact, acceptance tests tell the project team that they are not done when the tests fail; passing tests do not prove the reverse. That is something different – check the Wason selection task for an explanation of why the two directions are so easily confused. (Just now I realise that I had given false kudos to James Bach for this point. Sorry, Michael.) My reply was to view acceptance tests as a goal in the S.M.A.R.T. sense: Specific, Measurable, Attainable, Relevant and Time-bound. You can measure when you might have something to deliver, i.e. when the agreed examples from your specification workshops pass. You can measure when you might be ready to bring the software to the customer, and the goal is specific and should be – of course – business-relevant. A friend of Michael Bolton put it this way:

When the acceptance tests pass, then you’re ready to give it to a real tester to kick the snot out of it.

This led my thoughts back to a post of mine from February this year: Testing Symbiosis. The main motivation behind that post was a discussion on the Software Testing mailing list about Agile and Context-driven testing. In short, Cem Kaner's reply on that list to an article by Lisa Crispin and Janet Gregory led to Janet leaving the list. Enough history.

The gist of Testing Symbiosis is that Exploratory Testing and Test Automation complement and rely on each other. Automated testing alone can leave problems in your user interface or in the user experience of your product, while purely exploratory testing may let serious regression bugs slip through. A wise combination of the two can lead to a well-tested software product whose quality is perceived as high when it ships to your customer. Here is Michael Bolton's summary after reading Testing Symbiosis:

That said, I think that the role of the tester as an automator of high-level regression/acceptance tests has been oversold. I’m hearing more and more people agree with that. I think your approach is most sane: “On the other hand you can more easily automate software testing, if the software is built for automation support. This means low coupling of classes, high cohesion, easy to realize dependency injections and an entry point behind the GUI. Of course by then you will have to test the GUI manually, but you won’t need to exercise every test through the slow GUI with the complete database as backend. There are ways to speed yourself up.”

1) Automation support.
2) Entry points behind the GUI.
3) Human interaction with the GUI, more automation below it.
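To make the third point a bit more concrete, here is a minimal sketch of what I mean by an entry point behind the GUI. All names in it (OrderService, InMemoryOrderRepository and so on) are invented for this illustration; the point is only that the business logic takes its repository as an injected dependency, so an automated check can exercise it with a fast in-memory fake instead of driving the slow GUI against the complete database:

# Sketch of "automation below the GUI" via dependency injection.
# All names here (OrderService, InMemoryOrderRepository, ...) are
# hypothetical and only serve to illustrate the idea.

class InMemoryOrderRepository:
    """Fast fake standing in for the real database-backed repository."""

    def __init__(self):
        self._orders = []

    def save(self, order):
        self._orders.append(order)

    def count(self):
        return len(self._orders)


class OrderService:
    """Entry point behind the GUI; its repository is injected."""

    def __init__(self, repository):
        self._repository = repository

    def place_order(self, item, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._repository.save((item, quantity))
        return self._repository.count()


def test_placing_an_order_below_the_gui():
    # No GUI, no database: the check talks to the service directly.
    service = OrderService(InMemoryOrderRepository())
    assert service.place_order("book", 2) == 1


if __name__ == "__main__":
    test_placing_an_order_below_the_gui()
    print("below-the-GUI check passed")

With such a seam in place, only a handful of checks still need to go through the GUI itself, by hand or otherwise.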

Those two points – what passing acceptance tests do and do not tell us, and the symbiosis of exploratory testing and test automation – seemed to need some clarification. Feel free to remind me of any corners I have still left out.

Example-driven School of Testing

Some years ago Bret Pettichord defined four schools of software testing: the Analytic School, the Standard School, the Quality School and the Context-Driven School. These ideas were incorporated into the book Lessons Learned in Software Testing: A Context-Driven Approach by James Bach, Cem Kaner and Bret Pettichord. Later on Bret added the Agile School of testing. Some days ago I realized that the name of the Agile School of testing is rather poor. That is my hypothesis, and I would like to propose a new name, based on the insights of the last few years, for the thing Bret called the Agile School of testing.

Bret's initial hypothesis was that Agile software development is mostly about test-driven development with heavy use of automated regression tests while the code is written. That's why he included the following core beliefs of the Agile School of Testing:

  • Software is an ongoing conversation
  • Testing tells us that a development story is complete
  • Tests must be automated
  • Key Question: Is the story done?
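To illustrate what these beliefs look like in code – this is my own minimal sketch in Python, not something from Bret's material, and both the story and the business rule in it are made up – an automated check for a story like "the shop calculates the order total including shipping" answers the key question "is the story done?" every time it runs:

# Hypothetical story: "the shop calculates the order total including
# shipping". The function and the business rule are invented examples.

def order_total(item_prices, shipping_fee=5):
    """Code under development for the story."""
    subtotal = sum(item_prices)
    # Made-up rule: orders of 50 or more ship for free.
    return subtotal if subtotal >= 50 else subtotal + shipping_fee


def test_story_order_total_including_shipping():
    # In the Agile School's view, the story counts as done
    # once agreed checks like these pass.
    assert order_total([10, 20]) == 35   # below 50: shipping is added
    assert order_total([30, 25]) == 55   # 50 or more: free shipping


if __name__ == "__main__":
    test_story_order_total_including_shipping()
    print("story check passed")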

Some time later Brian Marick wrote down what was already in the heads of many people: tests in the Agile world are based on examples. Additionally, Brian raised the point that test-driven development should be renamed to example-driven development, since this reflects testing in an agile context more appropriately.

A bunch of techniques ending in 'dd' for '-driven development' appeared – mainly inspired by the Agile School, which started with test-driven development. Among these are Acceptance Test-driven development, behaviour-driven development and Domain-Driven Design (yeah, right, this one does not end in 'development').

Back in February I was introduced to Specification by Example by Gojko Adzic, who transferred the idea of Agile Acceptance Testing based on examples to a process of specification. As pointed out in one of his presentations on FIT and Agile Acceptance Testing, examples elaborate requirements or specifications. On the other hand, examples can also become tests – and this is basically what the Agile School of Testing teaches us: testing is based on writing down examples in order to check that you're done with developing the feature in your current iteration.
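As a rough sketch of how that can look in practice – the scenario and the numbers below are invented for illustration, not taken from Gojko's material – the agreed examples from a specification workshop can be written down as a small table and executed against the code, FIT-style but in plain Python:

# Hypothetical scenario: discount rules agreed in a specification
# workshop. Each row of EXAMPLES is one agreed example and doubles
# as both specification and test.

def discount_percent(order_value):
    """Invented rule under specification."""
    if order_value >= 200:
        return 10
    if order_value >= 100:
        return 5
    return 0


# (order value, expected discount in percent)
EXAMPLES = [
    (50, 0),
    (100, 5),
    (199, 5),
    (200, 10),
]


def test_agreed_examples():
    for order_value, expected in EXAMPLES:
        actual = discount_percent(order_value)
        assert actual == expected, (order_value, expected, actual)


if __name__ == "__main__":
    test_agreed_examples()
    print("all agreed examples pass")

The same rows can be read by the business as the specification and run by the team as the check that the feature is done.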

Based on this I propose to rename the Agile School of Software Testing to the Example-driven School of Software Testing. This would also make clear that it's not just about Agile, but rather about examples, as Brian Marick initially pointed out. Another benefit of this term would be the distinction from the Context-Driven School. I want to stress that there is no either/or between Example-driven and Context-Driven: the two can be adopted together, one of them alone, or neither of them. From my point of view these two schools can co-exist and complement each other when applied together.