Category Archives: Testing


Responding to Change

During my vacation stay in the US this year I was able to collect some notes, reflect on some things in the real world, and occasionally compare them with software development. Today I decided to do a write-up on the “Responding to Change” value of the Agile Manifesto.

While finishing the book The Pragmatic Programmer I was confronted with some concepts that I had also noticed in the real world. Early on in the Design Patterns movement I learned to decouple my code as far as possible to allow change to happen. Craig Larman and Robert Martin got me into this thought process early on.

Being a European, I found quite a few things different in the US compared to home. Here is a list of things I noticed, and I was wondering how hard it would be to change them in our software system. Luckily, most of these things do not appear in our software at all, while the major variation points such as currencies or tax systems had at least been thought about. How does your software respond to a change to the US system? What about selling your software to a European customer? For the testers and checkers among you: would you test that your software supports these? Do you test that it does? Does it? Here’s the list: take the test, make up your mind, and maybe let me in on your findings.

Currencies There are a bunch of currencies I was confronted with: Euros of course, the Deutsche Mark in the past (does anyone know what a “Groschen” is?), US Dollars, Pennies, Dimes, Quarters, Cents (does your software support the ¢ sign?), Disney Dollars (you can actually pay with them in Walt Disney World resorts or take them home as a gift for your mates), …

Tax Maps In the US every single state has a different tax model, and even every county can have its own. In Germany there are usually two different tax rates applied, the lower one for food, the higher one for the rest. In Brazil there are 27 states, each with their own tax model. Tax laws seem to be the most complex stuff.

Distances Meters, millimeters, centimeters, feet, inches, miles, kilometers, Bavarian ells, English ells.

Area sizes Square feet vs. square meters.

Volume Litres, cubic meters, gallons, to name just some.

Temperature How many degrees Celsius is one degree Fahrenheit? Is 90 degrees Fahrenheit hotter than 30 degrees Celsius? What about Kelvin?

Fuel prices How much is 2.59 USD per gallon in Euros per litre?

Consumption Is a fuel consumption of 28 miles per gallon the same as 5 litres per 100 kilometers? (See the small conversion sketch below the list.)

Voltages 110V vs. 220V vs. 400V, AC vs. DC.
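
To make the conversion questions above concrete, here is a minimal sketch in Java; the class and method names are my own invention, not taken from any real product, and the constants are the usual published conversion factors.

    // Hypothetical helper answering some of the questions above.
    public final class UnitConversions {

        private static final double LITRES_PER_US_GALLON = 3.785411784;
        private static final double KM_PER_MILE = 1.609344;

        // 90 degrees Fahrenheit is about 32.2 degrees Celsius, so hotter than 30 degrees Celsius.
        public static double fahrenheitToCelsius(double fahrenheit) {
            return (fahrenheit - 32.0) * 5.0 / 9.0;
        }

        // 2.59 USD per gallon is about 0.68 USD per litre; converting to Euros
        // additionally needs the current exchange rate.
        public static double perGallonToPerLitre(double pricePerGallon) {
            return pricePerGallon / LITRES_PER_US_GALLON;
        }

        // 28 miles per gallon is about 8.4 litres per 100 km, so not the same as 5 l/100 km.
        public static double milesPerGallonToLitresPer100Km(double milesPerGallon) {
            double kmPerLitre = milesPerGallon * KM_PER_MILE / LITRES_PER_US_GALLON;
            return 100.0 / kmPerLitre;
        }
    }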

What did I forget? What hard-to-change facts do you have to deal with?

Lesson 12: Never be the gatekeeper!

Adam Goucher wrote a blog entry today on being the gatekeeper as a software tester. Adam writes that it is a bad idea to be the gatekeeper, but there are of course situations where you can’t refuse to play that game in software testing.

The title is taken from Lessons Learned in Software Testing. James Bach, Cem Kaner and Bret Pettichord reason pretty well about this. In a private conversation with Michael Bolton we identified that “ship it” or “don’t ship it” is a business decision, and so far I haven’t been able to make that sort of business decision due to a lack of knowledge of and influence over the project, the development methodology, etc. Maybe there is something more to come for me, but in my roughly three years of experience as a software tester, I was never in the position to make these decisions.

The particular reason Adam’s blog entry caught my attention is that I faced a gatekeeping situation at work lately, and I refused to be the gatekeeper there – for good. Over the past few months we had a big push to finish the project we had worked on over the complete year. Of course there is a customer, who has stated the intention to give us some of their money if we finish this off properly. We are delivering our software to another company, which is then installing it for the customer. None of the three companies – the customer, the company we’re delivering to, and ourselves – seems to have a proper picture of the customer’s business logic. All we get is data dumps from the legacy system, which we then convert and adapt for our product.

Over the course of the project I was faced with the situation that the data dumps we got were incomplete. We were implementing a change request and needed that data dump; the changes were supposed to be delivered by the next business day. The software was about to be shipped, and right after sending it to our colleagues we realized there were severe gaps in the data we had received. So, what to do about it?

  • Be the gatekeeper, disapprove the delivery and wait one week for the next data dump.
  • Raise the point to the project manager and let him make the decision about this business issue.

Clearly the first option was never seriously considered by me, so I stuck with the second option for good. This case should make clear why gatekeeping is not a good role for a tester to give in to. If I had made the decision not to install the change request, which had been pending for one or two months, the business would have been blocked and we would have gotten the full blame. On the other hand, I would also have gotten the blame if we had delivered the software, I had let it through in my gatekeeper role, and it had not worked properly. On a side note, the project is spread between Germany and Brazil with a five-hour timezone difference between the two locations, which makes communication hard.

How would you have reacted? Maybe there is an option I overlooked here or did not consider because it seemed too stupid in the first place.

Succeeding with project methodologies

For my blog post Craftsmanship over Execution I had read about teams that were doing Agile – i.e. Scrum – and had practices in place – i.e. test-driven development. The point back then was that another development team might hear “Oh, they succeeded with Scrum, let’s take Scrum and be successful, too!” There are several reasons why this is a bad idea, and I decided to write about some of them.

Continue reading Succeeding with project methodologies

The tested checker

Michael Bolton recently brought up a series of blog entries on Testing vs. Checking, with follow-ups here and here. He explains perfectly the difference between the mindful job of testing and the checking attitude. While both have a reason to be done, checking is too often confused with testing. From my experience this has a negative impact on how testing and testers are perceived by the organization around them. If people in the organization are told about testing but actually have checking in mind, the lowered expectations about testing have a negative impact on the respect towards this mindful and thought-provoking activity.

Michael defines testing as a sapience-based activity: you need a human to perform it. Checking, on the other hand, is a rote act that anyone can do – even a computer. Based on my work experience, we usually think up new test cases, which we then automate, put into our revision control system, and have run on a regular basis. The former, the invention of the test case, is a testing activity. You need a human for this, since we are not capable of automating this step in our software development cycle. The latter, the mere execution of the automated test, is a checking activity. Any computer can do this, and mostly we even have it done by our continuous integration server without a human needing to attend. If something goes wrong, well, then we could have a problem and need a human to investigate the test result and decide whether or not the outcome indicates a problem. This again is a testing activity.
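
To make the distinction concrete with a tiny, made-up example (the TaxCalculator below is invented for this post, not taken from our product): designing the case (which category, which price, which tolerance) was testing; running it unattended on the continuous integration server is checking.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ReducedTaxRateCheck {

        // A trivial stand-in for the real class under test, just to keep the example self-contained.
        static class TaxCalculator {
            double grossPrice(double netPrice, boolean food) {
                return netPrice * (food ? 1.07 : 1.19); // German reduced vs. full VAT rate
            }
        }

        // Thinking up this case required a human: that was the testing activity.
        @Test
        public void reducedRateIsAppliedToFood() {
            TaxCalculator calculator = new TaxCalculator();
            // Executing this assertion unattended on the CI server is the checking activity.
            assertEquals(1.07, calculator.grossPrice(1.00, true), 0.001);
        }
    }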

These are not all the testing activities we do, of course. There are documentation reviews, bug reporting, etc. that also need to be done along the way. These also require a sapient human to carry out. (In the past I thought about automatically filing a bug report whenever an automated test case fails, but didn’t do it – so far.)

Matt Heusser noticed recently that there have been few blog entries on testing lately. Personally, I have cut down my blog posts in the last two weeks since I had to work on my submission for the Agile Testing Days conference in Berlin in October. There is a bunch of blog entries I would like to point my readers to that I have piled up since then. In case you’re looking for new testing blogs to read, Matt has compiled a list of worthy blogs in his entry on the best new software test writing.

Finally I would like to point you to Joe Harter’s blog. Yesterday he made a great first post. It is worth reading and a great story on how to teach testing to your colleagues.

August of Testing

This blog post will serve as a catch-all for the web pages I left open in my tabs during the last few weeks. In case you’re interested in what I have been up to since the beginning of August, it might be worth a read for you.

Back in July Mike Bria posted an article on Test objects, not methods. His lesson pretty much boils down to a single quote that I need to remind myself of:

Keep yourself focused on writing tests for the desired behavior of your objects, rather than for the methods they contain.
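
A small, made-up illustration of the difference (the Account class here is invented for this post): rather than writing one test per method, such as testDeposit() and testGetBalance(), the test names a desired behavior of the object.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AccountTest {

        // Minimal stand-in so the example is self-contained.
        static class Account {
            private int balance;
            void deposit(int amount) { balance += amount; }
            int getBalance() { return balance; }
        }

        // The test describes a behavior of the object,
        // not merely the existence of the methods deposit() and getBalance().
        @Test
        public void depositIncreasesTheBalance() {
            Account account = new Account();
            account.deposit(100);
            assertEquals(100, account.getBalance());
        }
    }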

Over the past weekend Michael Bolton came up with a final definition of checking vs. testing. Lately I had argued about whether or not we need a new name for testing. Michael Bolton was able to explain to me in this post that we do not need a new name, but a proper definition. This definition, in combination with Perfect Software, gives you pretty good insight into what to expect from testing and what to expect from checking. Adam Goucher summarized a talk by Chris McMahon at the Agile 2009 conference with the following insight:

The only reason to automate is to give expert testers time to manually test.

Ivan Sanchez pointed out that there is another area where a term has lately been abused with vague definitions. The term is “done”, and he calls for us to stop redefining it over and over. Basically, my experience shows that the same is pretty much true for a lot of words in our business: Lean, Test, Done, Agile, … Maybe you have more of these for the comments.

On my way home today I thought about a job profile I read over the past month. Basically, when translating it to my company, I would include “You know the difference between testing and checking” in the skill list.

Finally, Matt Heusser put up another test challenge. What I loved about it was the fact that I could more or less directly relate the lesson to my daily work. So it’s nothing really artificial once you build the bridge between the abstract challenge and your real-world project. Oh, and of course, for a geek like me it was fun. I wonder if I could do the challenge in the real world some day. I love lightsabres. And as a black-belt member of the Miagi-Do school I had to take that challenge, of course. Feel free to exchange some thoughts on the challenge with me if you have taken it.

Inter-company collaboration

Over the past few weeks I realized there is a problem in one of the projects I’m currently involved in. Our customer is located in Brazil and has contracted an external company for the integration into the legacy software landscape and the acceptance testing of the delivered solution. They have maintained the legacy system for years and have very good business knowledge about it. This company has in turn contracted us to deliver that system. So far, sounds great, doesn’t it?

Not so much. What has happened over the last year and a half is the following. Our company just gets vague information regarding requirements from the customer. We get lots of information, but it often does not make sense. The company that contracted us knows more about the legacy systems, but does not provide this information to a competitor – to us. Therefore it was torture to get the system right after long chains of late change requests over the past months, which basically ended up in us rewriting everything we had. Sounds worse now, doesn’t it?

But I’m not quite finished. What now unfolds is the need for my company to deliver the system in order to make money from it this very year. The end customer has money they want to spend on a new system, and we are in a long-term relationship with them. In order to finish this off, together with the data conversion, this year, we need to deliver the system by mid-September at the latest – or so we thought until last Friday. Now the middleman company has proposed to postpone the production date by two weeks and asked us to deliver fixes for any priority-one blocker bugs within three calendar days – while dealing with the pending change requests, too, of course. Altogether, over the past few months they have successfully exercised five percent of the tests they planned. The others are blocked or ran into errors. Curious, isn’t it?

It gets better. So far we have around ten bugs from them open in our bug tracking system; five of these are currently fixed, three will be fixed by tomorrow, and the remaining two should be dealt with over this week. Now the question arose today: how come we get back so few bugs if just five percent of the tests were successful and about 75 percent are blocked? (An unfair question to ask, but let me continue.) A colleague arrived back from an on-site visit today, and he explained that the middleman company seems to open twenty bugs if we fix ten. When we fix those twenty, they’ll probably open fifty new ones, and so on. Sure, this is an absence of trust due to the remote locations of the implementation teams, etc., etc. The point that strikes me is that the end customer, who will be paying both our companies in the end, is listening to them. Therefore we are asked to do massive overtime on the weekends and late at night (a five-hour time difference makes for tough working conditions), etc.

Oh, of course, we already tried the obvious: working together with the other company. So far it has not worked. We’re continuing to try, but with only semi-success so far. When seated next to their testers, a technical expert for our system just gets asked questions about how to export a shell variable or how to use some key combination in vi (they should use emacs, from my point of view). The striking point is that the other company does not realize the mutually beneficial situation we should be in. Now comes the interesting part. Today I was reminded of the negative aspects of metrics like “how many bugs do you find?” Some weeks ago the founder of the Miagi-Do school, Matthew Heusser, pointed me to a paper by Cem Kaner on metrics: Software Engineering Metrics: What Do They Measure and How Do We Know?. Today I was very, very surprised that, in the case of that other company, you don’t even seem to need those metrics to do harm to a project. All you need to do is basically give that middleman company the feeling that they won’t be needed any longer in the mid-term.

This reminded me of some of the terms from the various manifestos around:

… and certainly there are more of these. But what I noticed today is quite the opposite. Please share your comments with me.

Interview with Gerard Meszaros

A while ago Matt Heusser asked me to help him out with some interviews. Today InformIT announced the interview with Gerard Meszaros that I helped with. Its title is The Future of Agile Software Testing with xUnit And Beyond, and I’m glad that I could be of some help there. If you haven’t read Meszaros’ book, make sure to order it. It does not just cover unit testing, if you read between the lines.

XML Unit: assertEquals and fail

Some weeks ago I faced a challenge at work that led me to invent a unit test framework for XML files. After some basic first steps I decided to introduce the approach on my blog as well.

Motivation

Why did I need a unit test framework for XML files? Easy enough. Our product has an extensive configuration consisting of several XML files for the customization part. On the project I’m currently working on, there is a configuration file that consists of about 18 megabytes of customization in one XML file. Most of our other tests start up a full system (taking a bit more than 60 megabytes of main-memory database in combination with Oracle database persistence) and exercise the whole chain all through the system. Initially I tried to write a unit test using JUnit and JDOM, but it failed in my IDE with an OutOfMemoryError while the 18 megabytes were being loaded. Brian Marick had pointed out Ruby’s Assert { xpath } some weeks ago, and I started to consider it as an alternative. After realizing that nearly no one at my company knows Ruby, and that there would be drawbacks considering the way our company uses its revision control system, I dropped that alternative.

Then I remembered an approach using xsltproc and some stylesheets. Usability within our continuous integration framework was an issue, so I decided to produce output similar to the JUnit XML reports, so that the results can be wired in directly. This blog entry describes the first few functions that I built: assertEquals and fail. If there is demand for follow-ups, I will try to come up with an approach similar to the Test and Suite classes of JUnit. In the end a test runner will also be needed, which is currently handled via Makefile dependencies.
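
For orientation, the JUnit-style XML report that the stylesheets aim to produce has roughly this shape; the suite and test case names below are placeholders, not taken from the real project:

    <testsuite name="customization-config" tests="2" failures="1">
      <testcase name="assertEqualsOnDefaultCurrency"/>
      <testcase name="assertEqualsOnTaxRate">
        <failure message="expected '19' but was '16'"/>
      </testcase>
    </testsuite>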

Continue reading XML Unit: assertEquals and fail

Customer relations to beautiful code

What is Quality?

Quality is value to some person.

I would like to show how to apply this often-cited quote from Jerry Weinberg in order to decide whether to code beautifully (or elegantly) or to build a mess. To start, I will distinguish between the externally perceived quality of your software and the internal, structural quality of the code.

The external quality of your software is how the end user perceives it. Bulky user interfaces with complicated workflows can be valuable to someone, but most people prefer easy-to-use interfaces with easy-to-learn or even intuitive workflows. Your end users will therefore not care much about how beautifully you coded your application: whether or not you used test-driven development, whether or not you used the latest design patterns, whether you refactored on your way to shipping it. Isn’t that so?

No, it is not. Here again it depends on the context of your software, its use, and the situations in which your users want to use it. If the business model is highly coupled to laws such as tax systems and these laws change, a product of high quality today may actually become a mess tomorrow, since I suddenly have to recalculate the taxes myself using a pocket calculator. Since quality is value to some person at some point in time, quality may disappear if it is not maintained.

Here internal quality comes into play. If you have built a mess of your code, you make yourself unable to react to changes in the market or the law. Ward Cunningham, Martin Fowler and others call this concept Technical Debt. Sure, you might be able to overcome the situation with another mess, thereby disabling yourself for the next adaptation you are going to need. In the end you run into the situation where your code base can no longer cope with today’s needs, because you built in flaws in yesterday’s urgency.
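
A deliberately small, made-up illustration of such a mess in Java: a tax rate hard-coded wherever it is needed, versus kept in one place that can follow the law.

    // Made-up example of Technical Debt: the full German VAT rate is hard-coded inline
    // (and repeated in many other classes), so a change in the law means a hunt through the code base.
    class Invoice {
        double grossPrice(double netPrice) {
            return netPrice * 1.19;
        }
    }

    // Paying the debt down: one named, configurable source for the rate keeps the code
    // able to respond to the next change in the law.
    class TaxAwareInvoice {
        private final double vatRate;

        TaxAwareInvoice(double vatRate) {
            this.vatRate = vatRate; // e.g. read from configuration per country and date
        }

        double grossPrice(double netPrice) {
            return netPrice * (1.0 + vatRate);
        }
    }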

Alistair Cockburn came up with a description of this situation as a game with two goals: the Cooperative Game. The primary goal of the game is to deliver working software. This relates to the external quality of your software as it is perceived by the customers today. The secondary goal of the cooperative game in software development is to prepare for adapting the system to tomorrow’s needs. This relates strongly to the internal, structural quality of your code base. No end consumer will care about this – today – but they will be unhappy to pay for your Technical Debt in terms of later delivery and higher costs tomorrow.