All posts by Markus Gärtner

People who influenced me

During the creation of the Ethics of Software Craftsmanship, a statement was included that has tempted me to write this blog entry for quite a while now. Here it is:

We can point to the people who influenced us and who we influenced.

At the moment there are quite a few craftspeople who have influenced me. At the time of writing I have been working in the software business for three and a half years. Having finished university four years ago, I have gained a lot of experience from some great colleagues, some of whom I have never personally met but who influenced me quite a lot. Today I decided to give you a list of these people, who might not know about their influence on me. I will divide them into categories: Testing, Development and Project Management. Personally I might also add Leadership in general, but due to my limited experience in that field I will leave this category out.

Continue reading People who influenced me

Succeeding with project methodologies

For my blog post Craftsmanship over Execution I had read about teams that were doing Agile – i.e. Scrum – with practices such as test-driven development. My concern at the time was that another development team would hear "Oh, they succeeded with Scrum, let's take Scrum and be successful, too!" There are several reasons why this is a bad idea, and I decided to write about some of them.

Continue reading Succeeding with project methodologies

The tested checker

Michael Bolton recently brought up a series of blog entries on Testing vs. Checking, with follow-ups here and here. He perfectly explains the differences between the mindful job of testing and the checking attitude. While both have a reason to be done, checking is too often confused with testing. From my experience this has a negative impact on how testing and testers are perceived by the organization around them. If people in the organization get told about testing but actually have checking in mind, the lowered expectations about testing have a negative impact on the respect for this mindful and thought-provoking activity.

Michael defines testing as a sapience-based activity: you need a human to perform it. Checking, on the other hand, is a pure act that anyone – even a computer – can do. Based on my work experience, we usually think up new test cases, which we then automate, put into our revision control system and run on a regular basis. The former, the invention of the test case, is a testing activity. You need a human for this, since we are not capable of automating this step in our software development cycle. The latter, the mere execution of the automated test, is a checking activity. Any computer can do this, and mostly we even have it done by our continuous integration server without the attendance of a human. If something goes wrong, then we may have a problem and need a human to investigate the test result and decide whether or not the outcome indicates a real problem. This again is a testing activity.
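To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical names and values) of what such an automated check might look like once a human has invented the test case:

```python
# A check: a mechanical comparison of actual against expected output.
# Inventing the expectations below was the testing activity; running
# them unattended on every build is pure checking a machine can do.

def word_count(text):
    """A stand-in for some production function under check."""
    return len(text.split())

def check_word_count():
    # These expected values encode decisions a human tester made.
    assert word_count("testing is sapient") == 3
    assert word_count("") == 0
    return "PASS"

print(check_word_count())  # a CI server can run this without a human
```

Only when one of these assertions fails does a human need to step back in, to investigate whether the product or the expectation is wrong – and that investigation is testing again.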

This is not everything we do as testing activities, of course. There are documentation reviews, bug reporting, etc. that also need to be done along the way. These also require a sapient human to carry out. (I thought about automating the bug reporting in the past, triggered directly when an automated test case fails, but haven't done it – so far.)

Matt Heusser noticed that there have been few blog entries on testing recently. Personally I have cut down my blog posts in the last two weeks since I had to work on my submission for the Agile Testing Days conference in Berlin in October. There is a bunch of blog entries I would like to point my readers to that I have piled up since then. In case you're looking for new testing blogs to read, Matt has compiled a list of worthy blogs in his entry on the best new software test writing.

Finally I would like to point you to Joe Harter's blog. Yesterday he made a great first post. It is worthwhile to read and a great story on how to teach testing to your colleagues.

August of Testing

This blog post will serve as a catch-all for the web pages I left open in my tabs over the last few weeks. In case you're interested in what I have done since the beginning of August, it might be worth a read for you.

Back in July Mike Bria posted an article on Test objects, not methods. His lesson boils down pretty much to one single quote that I need to keep reminding myself of:

Keep yourself focused on writing tests for the desired behavior of your objects, rather than for the methods they contain.
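To illustrate that quote, here is a small hedged sketch (using a hypothetical Stack class – none of this comes from Bria's article) contrasting a method-focused test with a behavior-focused one:

```python
# A hypothetical class under test.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

# Method-focused: one test per method, mirroring the implementation
# surface rather than any behavior the caller actually cares about.
def test_push():
    s = Stack()
    s.push(1)
    assert not s.is_empty()

# Behavior-focused: names and checks the desired behavior
# ("last in, first out"), which survives refactoring of the methods.
def test_last_pushed_item_is_popped_first():
    s = Stack()
    s.push("first")
    s.push("second")
    assert s.pop() == "second"
```

The second test still passes if `push` and `pop` are renamed or reimplemented, as long as the object keeps behaving like a stack.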

Over the past weekend Michael Bolton came up with a final definition of checking vs. testing. Lately I had argued about whether or not we need a new name for testing. Michael Bolton was able to explain to me in this post that we do not need a new name, but a proper definition. This definition in combination with Perfect Software gives you pretty good insight into what to expect from testing and what to expect from checking. Adam Goucher summarized a talk from Chris McMahon at the Agile 2009 conference with the following insight:

The only reason to automate is to give expert testers time to manually test.

Ivan Sanchez stated that there is another area where a term has lately been abused with vague definitions. The term is "done", and he calls for a stop to redefining it over and over. Basically my experience shows that the same is true of a lot of words in our business. Lean, Test, Done, Agile, … maybe you have more of these for the comments.

On my way home today I thought about a job profile I had read over the past month. Basically, when carrying it over to my company, I would include "You know the difference between testing and checking" in the skill list.

Finally Matt Heusser put up another test challenge. What I loved about it was that I could more or less directly relate the lesson to my daily work. So it's not really artificial if you build the bridge between the abstract challenge and your real-world project. Oh, and of course, for a geek like me it was fun. I wonder if I could do the challenge in the real world some time. I love lightsabres. And as a black-belt member of the Miagi-Do school I had to take the challenge, of course. Feel free to exchange some thoughts on the challenge with me if you have taken it.

Writing about testing

Chris McMahon is currently organizing a peer conference on Writing about Testing. Please spread the word, since I believe it to be a good way to hone our craft and pass it on to future generations of great testers. The intended speakers are writers in the testing field: blog writers, article publishers, etc. Overall Chris seems to be well prepared for such a conference, and I hope I can either make it to his or organize one myself.

Inter-company collaboration

Over the past few weeks I realized a problem in one of the projects I'm currently involved in. Our customer is located in Brazil and has contracted an external company for the integration into the legacy software system landscape and acceptance testing of the delivered solution. They have maintained the legacy system for years and have very good business knowledge about it. This company has contracted us to deliver the system. So far, sounds great, doesn't it?

Not so much. What has happened over the last one and a half years is the following. Our company gets only vague information regarding requirements from the customer. We are given lots of information, but it often does not make sense. The company that contracted us knows more about the legacy systems, but does not provide this information to its competitor – to us. Therefore it was a torture to get the system right after long chains of late change requests over the past months, which basically ended up in us rewriting everything we had. Sounds worse now, doesn't it?

But I'm not quite finished. What now unfolds is the need for my company to deliver the system in order to make money from it this very year. The end customer has some money they want to spend on a new system, and we are in a long-term relationship with them. In order to finish this off, together with data conversion, within this year, we need to deliver the system by the middle of September at the latest – or so we thought until last Friday. Now the middleman company has proposed postponing the production date by two weeks and asked us to fix any priority-one blocker bugs within three calendar days – while dealing with the pending change requests, too, of course. Altogether they have successfully exercised five percent of the tests they planned over the past few months. The others are blocked or produced errors. Curious, isn't it?

It gets better. So far we have around ten bugs from them open in our bug tracking system; five of these are currently fixed, three will be fixed by tomorrow, and the remaining two should be dealt with over this week. Now, the question arose today: how come we get back just this small number of bugs if only five percent of the tests were successful, with about 75 percent being blocked? (An unfair question to ask, but let me continue.) A colleague arrived back from an on-site visit today, and he explained that the middleman company seems to open twenty bugs if we fix ten. When we fix those twenty, they'll probably open fifty new ones, and so on. Sure, this is an absence of trust due to the remote location of the implementation teams, etc. The point that strikes me is that the end customer, who will be paying both our companies in the end, is listening to them. Therefore we are asked to do massive overtime on weekends and late into the night (a five-hour time difference makes for tough working conditions), etc.

Oh, of course, we already tried the obvious: working together with the other company. So far it has not worked. We're continuing to try, but with only semi-success so far. When placed alongside their testers, a technical expert for our system just gets asked questions about how to export a shell variable or how to use some key combination in vi (they should use emacs, from my point of view). The striking point is that the other company does not realize the mutual-benefit situation we should have. Now comes the interesting part. Today I was reminded of the negative aspects of metrics like "how many bugs do you find?" Some weeks ago the founder of the Miagi-Do school, Matthew Heusser, pointed me to a paper from Cem Kaner on metrics: Software Engineering Metrics: What Do They Measure and How Do We Know? Today I was very, very surprised that, in the case of that other company, you don't even seem to need those metrics to do harm to a project. All you need to do is basically give that middleman company the feeling that they won't be needed any longer in the mid-term.

This reminded me of some lines from some of the manifestos around:

… and certainly there are more of these. But what I noticed today is quite the opposite. Please share your comments with me.

Interview with Gerard Meszaros

A while ago Matt Heusser asked me to help him out with some interviews. Today InformIT announced the interview with Gerard Meszaros that I helped with. Its title is The Future of Agile Software Testing with xUnit And Beyond, and I'm glad that I could be of some help there. If you haven't read Meszaros' book, make sure to order it. It covers more than just unit testing if you read between the lines.

XML Unit: assertEquals and fail

Some weeks ago I was faced with a challenge at work that led me to invent a unit test framework for XML files. After some basic first steps I decided to introduce the approach on my blog as well.

Motivation

Why did I need a unit test framework for XML files? Easy enough. Our product has an extensive configuration consisting of several XML files for the customization part. On the current project I'm working on there is a configuration file which consists of about 18 megabytes of customization in one XML file. Most of our other tests start up a full system (taking a bit more than 60 megabytes of main-memory database in combination with Oracle database persistence) and exercise the whole chain through the system. Initially I tried to write a unit test using JUnit and JDOM, but it died with an OutOfMemoryError in my IDE while the 18 megabytes were being loaded. Brian Marick had pointed out Assert { xpath } from Ruby some weeks ago, and I started to consider this as an alternative. After realizing that almost no one at my company knows Ruby and that there would be drawbacks considering the way our company uses its revision control system, I forgot about this alternative.

Then I remembered an approach using xsltproc and some stylesheets. Usability for our continuous integration framework was an issue, so I decided to produce output similar to the JUnit XML reports, which can then be directly wired in. This blog entry describes the first few functions that I built: assertEquals and fail. If there is demand for follow-ups, I will try to come up with an approach similar to the Test and Suite classes of JUnit. In the end a test runner will also be needed, which is currently handled by Makefile dependencies.
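To give a feel for the idea before diving into the stylesheets, here is a rough sketch in Python (not the actual framework – the real implementation uses xsltproc and XSLT, and all names and the sample document below are illustrative) of what assertEquals and fail do: evaluate a path expression against a configuration file and record the outcome as a JUnit-style testcase element.

```python
import xml.etree.ElementTree as ET

def assert_equals(name, tree, path, expected, results):
    """Compare the text at `path` against `expected`; on mismatch,
    attach a JUnit-style <failure> element to the test case."""
    node = tree.find(path)
    actual = node.text if node is not None else None
    testcase = ET.SubElement(results, "testcase", name=name)
    if actual != expected:
        failure = ET.SubElement(testcase, "failure")
        failure.text = f"expected {expected!r} but was {actual!r}"

def fail(name, message, results):
    """Unconditionally record a failing test case with `message`."""
    testcase = ET.SubElement(results, "testcase", name=name)
    ET.SubElement(testcase, "failure").text = message

# Example run against a tiny stand-in for a customization file:
config = ET.fromstring("<config><timeout>30</timeout></config>")
results = ET.Element("testsuite")
assert_equals("timeout is 30", config, "timeout", "30", results)
print(ET.tostring(results, encoding="unicode"))
```

Because the result document mimics the JUnit XML report shape, a continuous integration server that already consumes JUnit reports can pick it up without extra glue.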

Continue reading XML Unit: assertEquals and fail