The essence of agile testing

After experiencing the concepts of test-driven development during the last few weeks, I realised what the essence of agile testing is. Having reviewed the main parts of Janet Gregory’s and Lisa Crispin’s upcoming book on Agile Testing back in February, I only now noticed a common pattern in test-driven development and agile testing: tests – or, as Brian Marick puts it, examples – drive the development process. You start with a first simple example. This might be “the collection has just one element” during test-driven development or “the account is activated afterwards” in a test scenario. From my point of view this can be thought of as the essence of agile testing in either case.
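
To make the idea a little more concrete, here is a minimal sketch of what such a first simple example might look like in code. The class and test names are made up for illustration; it simply expresses “the collection has just one element” as a JUnit test that would be written before the production code it drives.

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.Test;

    // Hypothetical first example in the TDD sense: the simplest possible
    // test, driving the design of the code that is yet to be written.
    public class ShoppingCartTest {

        @Test
        public void collectionHasJustOneElementAfterAddingOneItem() {
            List<String> cart = new ArrayList<String>();

            cart.add("book");

            assertEquals(1, cart.size());
        }
    }

From there, further examples are added one by one, each driving the next small piece of behaviour.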

Shu-Ha-Ri and psychology

Alistair Cockburn’s introductory chapter in Agile Software Development – The Cooperative Game reminded me today of something I learned before entering university. My fourth examination subject in my German Abitur was educational science. My teacher at that time (1996–1998) had a strong background in psychology and introduced us to Lawrence Kohlberg’s stages of moral development. Cockburn’s introductory comments on Shu-Ha-Ri reminded me of this when I read the chapter this morning.

Since I learned about Kohlberg’s model nearly 12 years ago and never had any practical use for it, I had to look it up again. Cockburn had raised the question in my mind whether there were any parallels between Kohlberg’s model and the Shu-Ha-Ri stages of education. I started by digging out the notes we were given at school. Since I kept everything I wrote down during my Abitur years, this was quite easy, though I did not remember the right semester.

Kohlberg’s model describes three main levels, each with two sub-levels or stages. There is also a zeroth level and – as I learned today – an undocumented seventh stage. The three main levels with two stages each form the basis of his theory. At the first (pre-conventional) level the student is influenced mainly by the teacher, and the teacher’s opinion of right and wrong is more or less mapped onto the student’s. The second (conventional) level describes reasoning from a social point of view.

Persons who reason in a conventional way judge the morality of actions by comparing these actions to societal views and expectations.

Lawrence Kohlberg’s stages of moral development

The third (post-conventional) level goes beyond this. The argumentation follows principles of ethics such as Immanuel Kant’s categorical imperative.
Personally I was hoping to find parallels between these three main levels and the Shu-Ha-Ri concept.

Shu-Ha-Ri itself consists of three stages of communication and learning. The first stage describes the obeying phase.

The student should absorb all the teacher imparts, be eager to learn and willing to accept all correction and constructive criticism.

In the second stage:

The student’s individuality will begin to emerge in the way he or she performs techniques. At a deeper level, he or she will also break free of the rigid instruction of the teacher and begin to question and discover more through personal experience.

For the third stage:

Although the student is now fully independent, he treasures the wisdom and patient counsel of the teacher and there is a richness to their relationship that comes through their shared experiences. But the student is now learning and progressing more through self-discovery than by instruction and can give outlet to his or her own creative impulses.

The meaning of Shu-Ha-Ri

Though I had hoped to find parallels between Kohlberg’s model and Shu-Ha-Ri, I was disappointed to find none. After reading more about Kohlberg on Wikipedia I concluded that Shu-Ha-Ri can be thought of as going further than Kohlberg’s model. Especially the criticisms of Kohlberg made me aware of this: the cases Kohlberg did not think of are exactly the cases where Shu-Ha-Ri steps in.

That said, I rather hope you disagree with this statement of mine and will share your viewpoint on the topic.

Testing and requirements gathering

Lately there has been some heavy discussion on the Agile Testing mailing list, which started with a question about the developer-to-tester ratio. Today Ron Jeffries made a good point in a sub-discussion on the different skill sets of developers and testers:

Suppose we have some “requirement” and some test examples. Suppose that we are worried that the “requirement” is not met. What do we do? We produce another test example. While we are unconvinced that the “requirement” is met, we keep testing and coding. When we become convinced (or sufficiently confident), we stop testing and stop coding. Therefore the tests are the requirements.

Ron Jeffries

After thinking a while about Ron’s quote, I came to the conclusion that testing – even in the agile context – is what it is: the translation of business interests into the development process. This short conclusion was difficult for me to express, so let me refine it.

Leaving the agile context aside for a second, testing shall provide the necessary feedback that the software under test has reached the right degree of requirement fulfilment. Analytical testers use coverage metrics for this, while an agile tester agrees with the customer representatives on the team on which points to measure requirement fulfilment. Like in a Fourier transformation in mathematics, you have discrete sampling points on your function (the software). The task a tester has to fulfil in order to deliver well-tested software is to identify the supporting points of that function. If the tests are built on uninteresting points of the function, the testing effort might as well have been saved. This may sound a little tough, but if you just check the error behaviour, you may – and most likely will – miss the business-relevant tests on the happy path and therefore deliver software at a very high risk.
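
As a rough illustration of what I mean by the supporting points, the following sketch contrasts a business-facing happy-path check with a pure error-behaviour check, picking up the account activation example from above. The Account class is a made-up stand-in, not our real system.

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class AccountActivationTest {

        // Made-up stand-in for the real business object under test.
        static class Account {
            private boolean active;
            void activate() { active = true; }
            boolean isActive() { return active; }
        }

        // The business-relevant supporting point: the happy path.
        @Test
        public void accountIsActivatedAfterwards() {
            Account account = new Account();
            account.activate();
            assertTrue(account.isActive());
        }

        // Error or edge behaviour alone would leave the happy path – and the risk – uncovered.
        @Test
        public void freshAccountIsNotActiveYet() {
            assertFalse(new Account().isActive());
        }
    }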

Worth reading

Bret Pettichord pointed me to a column by Andrew Binstock: Debunking Cyclomatic Complexity shows that cyclomatic code complexity does not correlate with bug likelihood. Personally I found it well worth reading, since it shows how wrong you can go with all the measurement activities you might think of.

James Shore reminded me of the aspects of conditioning that I first got to know during my time at school working towards my Abitur. In educational science I was introduced to conditioning in my eleventh year, if I remember correctly. In his article James shows perfectly how wrong assumptions about rewards can be, which I found quite worth reading given my background as a team leader.

Measurement of success

Today I came across three different measurements of success, which I mentally applied to the current status of our project to migrate legacy tests to FIT/FitNesse. My plan for the last week foresaw having some kind of “proof of concept” for a very first test. Two weeks ago we started migrating our legacy tests from a shell-script approach towards more business-facing tests using FitNesse. We planned for three weeks, so we are currently more or less halfway through the iteration. With some hard work I managed to get my first “proof of concept” working by yesterday and felt successful about it. Then today, while reading The Art of Agile Development by James Shore and Shane Warden, I came across three different types of success measurement. Their definitions made me thoughtful about the things I had been happy about yesterday.

Basically we started with five people working on this project. First of all we wanted to concentrate on two areas of our product under test. For these two areas we decided to write down particular tests in FIT style. Besides this test definition work, for which most of my testers were planned, we wanted to build up a small framework to integrate our application and keep the fixtures to be created simple.

By last Friday I wanted to have a first “proof of concept” for this whole effort – a single first test which would hopefully pass. Since only one of our external staff members was working on the framework, it was not very stable or usable yet. I decided to work on the proof of concept I had set as my goal for the week myself. The remaining two available testers in my group started maintaining our legacy test suite, since a new release of the system was upcoming and the framework was not usable, so we could not really include them in our work yet.

By Thursday evening I had checked in a very first draft, which had the setup mechanisms hard-coded but was already triggered properly. On Friday I started to work around the framework my external staff member was building in order to get our first test running and doing everything the table intended. I managed to replace the hard-coded setup mechanism and introduced the result checks as intended. In the end everything showed up green and I was happy.
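
For illustration, a fixture in the spirit of what I ended up with might look roughly like the following column fixture. The class, field and column names are invented; only the fit.ColumnFixture base class is the actual FIT API, and the method body is a placeholder for the call into our integration framework.

    import fit.ColumnFixture;

    // Rough sketch of a FIT column fixture: public fields are the input
    // columns of the test table, public methods are the result columns.
    public class ActivateAccountFixture extends ColumnFixture {

        public String accountId;  // input column "accountId"
        public String tariff;     // input column "tariff"

        // result column, e.g. "activated?" in the FitNesse table – here a
        // trivial placeholder instead of the (hypothetical) framework call
        public boolean activated() {
            return accountId != null && accountId.length() > 0;
        }
    }

The corresponding FitNesse table would then list one row per accountId/tariff combination with the expected value in the result column.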

In The Art of Agile Development I learned that there are three kinds of success to think about: personal success, technical success and organisational success. The personal success I felt by yesterday evening is obvious: I met my weekly goal, and a first “proof of concept” is working. Technically, I produced a workaround solution which works but will be hard to maintain. From my point of view I have not been successful in this area when looking at the fixture code I produced during the last week. Organisationally, we reduced the scope of our iteration, since we could not make as good progress as we would have liked.

Therefore I have a 1:2 situation and I’m not that happy with the success I have achieved during the last week. I hope I can smooth things out and at least meet the technical success criteria, which I would like to have done by next Friday, but unfortunately I’m currently not that convinced about it. Maybe it would help to find out what we have been doing wrong and how to react to it in the next iteration planning.

Exploratory Testing driven development vs. Exploratory test infection

Since Brian Marick pointed me in that direction, I started investigating Exploratory Testing driven development. It occurred to me that I did not really mean Exploratory Testing driven development, since Exploratory Testing is not driving the development. The more I thought about it, the more what Brian pointed out seemed to be Exploratory test infection rather than my first term. The term Exploratory Testing driven development is misleading, since the primary focus of the development should not be to have it properly set up for Exploratory Testing. Rather, the value the software delivers to the customer’s business is what is relevant. This might include easy information gathering during Exploratory Test sessions, but personally I think that, especially when performance is an issue, the ability to do easy Exploratory Testing might conflict with the intended business value.

Test infection describes the fact that source code can be changed safely, backed by regression tests that assure there are no unintended changes in behaviour, as well as by examples for the new changes – when considering test-driven development. Therefore, if you have a product which can be accessed easily within an Exploratory Test session for all the parameters a human tester might want to find out about, it should be called Exploratory test infected. Note that this definition goes beyond simply putting “everything” into some logfile. The human interaction within Exploratory Testing can go in directions which were not thought of during the development of the software.

The more I think about this, the more I believe that there might not be any software which is fully Exploratory Testing infected. Because of this, the engineer in me cries out for some measurement of the degree to which a piece of software is Exploratory Testing infected. Over the next days I will think about possible measurement techniques for Exploratory Testing infection. Maybe I can then evaluate some popular software against them.

Exploratory-testing driven development

Yesterday Brian Marick made a blog entry with some thoughts on alternatives to business-facing TDD. Here is the abstract:

Abstract: The value of programmer TDD is well established. It’s natural to extrapolate that practice to business-facing tests, hoping to obtain similar value. We’ve been banging away at that for years, and the results disappoint me. Perhaps it would be better to invest heavily in unprecedented amounts of built-in support for manual exploratory testing.

An Alternative To Business-Facing TDD

It took me the whole night to realise that I had already been faced with something similar. I would like to call his idea exploratory-testing driven development, and I must admit I have already been confronted with it. Let me add some more context. In our organisation we develop a huge product consisting of several software sub-systems written in C, C++ and Java. The Java sub-systems are built on a JBoss application server, and the key sub-system, which is responsible for maintaining all the data from a business point of view, is hard to test in an exploratory way, since it is hard to get the proper data into the system that you need for more advanced tests. It’s possible, but hard.

During our last project-end overtime schedule – yes, we’re doing waterfall… – I got the chance to learn some more details about one C/C++ based sub-system, which sits at the low-level end of the whole chain. All communication in that sub-system consists of xml data. A colleague and I had a problem, so we contacted the main champion of that sub-system. We gave him the last snippet of communication, which came from the Java component. He took that xml data, snipped a little here and there, put a small xml portion into a new root container and – ta-da – he had all the data he needed to reproduce the behaviour in the sub-system he had just started.

This was the point where I realised what I would call “exploratory testing driven development”. Within this particular sub-system you can introduce a complete, complex data set with absolute ease (compared to the backbreaking things you have to do with the Java sub-systems).
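
For those who want to picture it, here is a rough sketch of what he did by hand: take the xml fragment from the Java component’s communication and wrap it in a new root container before feeding it to the sub-system. The element names (request, order, item) are invented for illustration.

    import java.io.StringReader;
    import java.io.StringWriter;

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.xml.sax.InputSource;

    public class WrapXmlFragment {

        public static void main(String[] args) throws Exception {
            // the snippet taken from the Java component's communication (invented example)
            String fragment = "<order id=\"4711\"><item code=\"A\" amount=\"2\"/></order>";

            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            Document source = factory.newDocumentBuilder()
                    .parse(new InputSource(new StringReader(fragment)));

            // create a new document whose root container is what the sub-system expects
            Document target = factory.newDocumentBuilder().newDocument();
            Element root = target.createElement("request");
            target.appendChild(root);

            // import the snipped portion below the new root
            Node imported = target.importNode(source.getDocumentElement(), true);
            root.appendChild(imported);

            // serialise the result so it can be fed into the sub-system
            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(target), new StreamResult(out));
            System.out.println(out.toString());
        }
    }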

My takeaway is that I encourage this kind of support from development. Unfortunately, though, I think it can be impossible to have complete exploratory testing support within an application while still meeting performance and/or usability goals. Basically this depends mainly on the context of your application under test. But I will leave these thoughts as they are for the moment.

quakenet secure auth script for xchat

r0bert was so kind as to package my Python script which I use for secure challenge auth on the QuakeNet IRC network. For the new authentication mechanism of the new Q-Bot I had to make some adaptations, which I shared with him. From my point of view there is nothing really special in this script; it’s just a good helper.

You may go and grab it at the link below. Personally I would be happy not to be bothered with too many bug reports, since this thing was not really well tested. It just works “as is” for me. :)

quakenet_xchat_script.zip

Here is the corresponding content from the README file:

This script provides automated authentication with QuakeNet’s new (as of 22.03.2008) Q service for the free, graphical IRC client XChat.
Get it at http://www.xchat.org
QuakeNet’s website is http://www.quakenet.org

The Script makes use of the (new) challengeauth function, so the user’s
auth-password will only be submitted in encrypted form.
After having auth’ed with Q, user-modes will be set (e.g. +x) and
predefined channels will be joined.

Fill out the lines beginning with auth_nicks, auth_passes, channels and
modes at the beginning of the script and place it into your user’s
XChat directory, e.g. /home/username/.xchat2 – the script will be loaded
when XChat starts.

The script is released and licensed under the terms of the
GNU GPL Version 3.

Learn more about the GPL at http://www.gnu.org/licenses/gpl-3.0.html

Script-Author: ShiN0 –> http://www.shino.de/bog
Polishing and Propaganda: r0bert –> #konsolen@QuakeNet

If you like this, maybe I can additionally point you to my SourceForge project on Bayesian filtering for IRC. (I know it needs an update.)

Agile projects and project management

During the last week I attended a seminar organised by my employer. During one of the lunch breaks I found some time to talk to one of our project managers. He stated that he basically likes the Scrum method. Unfortunately there is one big reason why he does not like Scrum at all: it does not need project managers.

This statement sparked some research interest in me. Since I have not yet found the time to read Agile Software Development with Scrum by Mike Beedle and Ken Schwaber – it has been on my Amazon order list since the end of January – and I don’t know that much about Scrum, I would welcome any articles worth reading on the need for project management within agile projects.

My personal intention is to find out more about agile development and how to establish it within our organisation. My first attempt is to show how agility works within a small test automation project. Lately I have got a lot of inspiration from reading several newsgroups on agile-related topics, and additionally I used my last vacation to review Lisa Crispin’s and Janet Gregory’s upcoming book on agile testing (I would order it right away if it were already finished). For the project management part I would also like to provide an answer, since our organisation is used to working as a matrix organisation, and I am very confident that agile projects – even when they are very complex – still need someone to keep track of interdependencies and lead the project to success.

If you have something on your list that you would like me to read to gain better insight, please add a comment to this article. I would be very pleased.
