Over the weekend I introduced ParkCalc automation. Today we will take a closer look at the third test in the provided examples and see how we can improve it. Before I do, let me point you to two great articles from Dale Emery. The first is a ten-page piece where he walks through a login screen; Uncle Bob showed the same example using FitNesse with Slim. In the second he describes a layered approach to software test automation very well. Together with Gojko’s anatomy of a good acceptance test, this gives us a picture of where we should be heading.
Continue reading ParkCalc automation – Refactoring a data-driven test

Category Archives: Agile Testing
Testing inside agile development cycles
ParkCalc automation – Getting started
This week Gojko Adzic wrote about the anatomy of a good acceptance test. After having read his elaboration, I remembered how I came up with the preparation for the EWT19 session some weeks ago. We used Robot Framework to automate tests for the Parking Lot Calculator, which we had asked Weekend Testing participants to explore manually a few weeks earlier. To get testers started we provided them with three examples that I had prepared before the session. We then asked testers to automate their tests for the ParkCalc website based on one of the examples we provided. Here is my write-up of how I came up with the examples, and what I had in mind.
XP2010: Testing Dojos
This is my presentation from the Testing Dojos workshop this afternoon, which didn’t take place due to lack of participants. I may run it at a conference in the future, preferably a testing conference; let’s see. I have a write-up here on my blog, where you may read more about it.
Shift
Matt Heusser wrote about the question Are testers going away? in a blog entry yesterday. As I started to write a comment on his blog, I noticed that I should selfishly make my own blog entry out of it. So, in case you haven’t read Matt’s entry, go there and read it first, maybe.
Continue reading Shift

Software Management 0.5
Just this morning it occurred to me, and there was the solution to all our problems. It had been right in front of my face, so close that I hadn’t seen it, but it is so clear now.
We need to separate testing from the act of programming.
Wow. That’s a statement. But I’m serious. It has never worked, and the large number of failing teams with Scram or Krumban, or whatever they call it, got it wrong. Yeah, Agile got it wrong. Collaboration is for the weak. After having spent over fifteen years with this crap, we need to get our bricks for the silos out of the closet again and build up walls between those teams. Give testers different offices, on different floors, in different buildings, heck, what am I saying, give them different planets to be on, so they communicate mainly over the bug tracking system.
And I want to see a test plan, with every detailed test written up-front. Now. Show me. And I want to see Gantt-chart-based progress reports, every week, and every day or even twice a day if there is an escalation. And I don’t want to spend time on re-planning. The initial plan must hold. Test design documents are fixed after being created initially. Yeah, that’s what we’re going to do.
While we’re at it, is there a way to make that waterfall model even heavier? How much ceremony may we add to it? Just that? Come on, give me more than just the usual bug metrics and that crap. I want three additional testing cycles at the end. And I want QA to approve every delivery we make. Sure, they have to sign it. On paper, three copies to each major division head. Exactly.
If you haven’t figured it out yet, this is a rant about a personal Black Swan: a friend of mine just told me about his replaced management. Exactly, they’re cutting the collaboration between programmers and testers, dividing them, right now, in this century, nearly ten years after the Agile manifesto and its focus on team values and cross-functional teams. This manager made the experience that developers and testers agree upon a delivery in a dysfunctional way. Therefore he wants to separate them again. And every major change project is deemed a failure if it is not implemented after 90 days. I’m not that good a clairvoyant, so, what would you suggest my friend do next?
The Deliberate Tester
A while ago I started to write a tester’s novel, heavily inspired by Elisabeth Hendrickson‘s article in the January 2010 issue of the Software Test and Performance magazine. Personally, I wanted to try my hand at a tale of a tester after having read her article, and got in touch with Elisabeth. We exchanged some thoughts, some words, and voila, there it was.
I wish it had been that easy. Basically, what I wanted to tell is the story of a new tester in a larger corporation. Based on my personal experience four years ago, I realized there was little previous knowledge that I could have brought with me to the job. Between the job interview and starting as a tester, my father, who had suffered from cancer for a little more than a decade, died. One day after the job interview I brought him to hospital. While dealing with family affairs I got the call offering me the position. Having taken this hard setback, I nevertheless started to dig into the field and learned. First by gaining experience from colleagues, later by reading the classics and reflecting constantly on the practices I followed.
After having read a blog entry from Anne-Marie Charrett and one from Rob Lambert earlier, I decided to tell a tale about a tester getting introduced to our work. Alongside, I want to spread the word on what we’re actually doing, though there may be just a few outside the testing field who actually care. Therefore, I realized I needed to write it as an authentic yet fictive story, just like “The Craftsman” from Robert C. Martin.
So, I realized that I might split my work up into pieces. So far I have planned three pieces based on the original article, and there is room for further articles in the same manner. I’m still working on the story-line, so I would be glad to get some feedback on future episodes from readers, which I might incorporate. The Software Testing Club put the first episode of the Deliberate Tester up on their blog. You can read it here; it’s called Session based exploration.
Writing automated business-facing tests
Since I work in a more traditionally oriented environment, I’m facing some questions regarding the usage of test frameworks such as FitNesse, Robot Framework or Concordion. The biggest problem I hear about very often is starting to implement fixture code before any test data, in terms of test tables or HTML pages, is available. The scene from J.B. Rainsberger’s Integration tests are a scam talk immediately comes to mind, where he slaps the bad programmer’s hand. Bad! Bad, bad, bad! Never, ever do this. So, here is a more elaborate explanation, which I hope to use as a reference for my colleagues.
The first thing to do is pick up the classics on the topic and check what they say about it. So, let’s start with the classic FIT for Developing Software. The book is separated into two parts: the first covers the table layouts, the second goes into detail about the classes for the implementation. Ward Cunningham and Rick Mugridge thus seem to follow this pattern. Great. Next reference, Bridging the Communication Gap. There, Gojko introduces specification workshops and specification by example. Both are based on defining the test data first and automating it later. This helps build up the ubiquitous language on the project at hand.
But there is more to it. Since test automation is software development, let me pick an example from the world of software development. Over the years, Big Design Up-front has become an anti-pattern in software development. Though there are some pros to it, on the con side I may try to think of each and every case I might need for my test data, and I may be wrong about that. So, just in case you are not from Nostradamus’ family, thinking about your design too much up-front may lead to over-design. This is why Agile software development emphasizes emergent design and the simplest thing that could possibly work. Say I work now on ten classes which I turn out not to need at all once the test data is noted down; then I have spent precious time on building them – probably without even executing them. When the need for twenty additional classes arises later on, the time spent on those initial useless ten classes cannot be recovered. Additionally, these ten classes may now make me suffer from Technical Debt, since I need to maintain them – just in case I may need them later. Maybe the time spent initially on the ten useless classes would have been better spent on getting the business cases down properly in the first place – for those who wonder why your pants are always on fire.
Last, if I retrofit my test data to the functions available in the code, I have to put unnecessary detail into my tests. The FIT book as well as the Concordion hints page list this as a malpractice or smell. For example, if I need an account for my test case and I am retrofitting it to a full-blown function call which takes a comma-separated list of products to be associated with the account, a billing day, a comma-separated list of optional product features and a language identifier as parameters, I would write something like this:
create account | myAccount | product1,product2,product3 | 5 | feature1,feature2,feature3 | EN |
If I can apply wishful thinking to my test data, I am able to write it down as briefly as possible. So, if I don’t need to care about the particular products and features sold, I may as well boil the above table down to this:
create account | myAccount |
In addition to this simplification, think about the implications a change to create account in the example above would have when I need to add a new parameter, for example the last billed amount for that account. If I had come up with six hundred test tables by the time this additional feature is introduced, I would have to change all six hundred tests. The time spent changing these six hundred tests will not be available for testing the application. Wasted – by my own earlier fault!
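To illustrate the point, the brief table works because the fixture behind it absorbs the omitted detail with sensible defaults. Here is a minimal Python sketch of such a keyword/fixture; all names in it – `create_account`, the default products and features, the in-memory `accounts` store – are hypothetical illustrations, not from ParkCalc or any real fixture library:

```python
# Hypothetical fixture backing the brief "| create account | myAccount |" table.
# The test data names only what it cares about; everything else defaults here.

DEFAULT_PRODUCTS = ["product1", "product2", "product3"]
DEFAULT_FEATURES = ["feature1", "feature2"]

accounts = {}  # stand-in for whatever the system under test provides

def create_account(name,
                   products=None,
                   billing_day=1,
                   features=None,
                   language="EN",
                   last_billed_amount=0):
    # When the new "last billed amount" parameter arrives, it simply gets a
    # default value here -- none of the existing test tables need to change.
    accounts[name] = {
        "products": products or DEFAULT_PRODUCTS,
        "billing_day": billing_day,
        "features": features or DEFAULT_FEATURES,
        "language": language,
        "last_billed_amount": last_billed_amount,
    }
    return accounts[name]

# The brief table row maps to a single call:
account = create_account("myAccount")

# The verbose, retrofitted table row would map to the fully specified call:
detailed = create_account("myAccount",
                          products=["product1", "product2", "product3"],
                          billing_day=5,
                          features=["feature1", "feature2", "feature3"],
                          language="EN")
```

The design choice is the one argued above: the what (the table) stays brief and stable, while the how (the defaults) lives in one place in the fixture, so a new parameter costs one edit instead of six hundred.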
In the end, it boils down to this little sentence I used to describe this blog entry briefly on twitter:
When writing automated business-facing tests, start with the test data (the what), not the fixture to automate it (the how). ALWAYS!
“So, what should I do tomorrow?”
For far too long I have put off this write-up. The motivation for this entry comes mainly from Anne-Marie Charrett, who had a dream about software testing. Rob Lambert then reminded me of this by stating that I shouldn’t judge people too quickly. So, what is this all about?
Reflecting on my personal education and development background: I came from university into software testing four years ago. I had never heard anything about it and had never actually been confronted with test-driven development, knowing nothing more than the concept behind it. I was introduced to software testing just by getting to know the shell-script-based test automation that was done in my department. Over the course of one or two years, we found out that we had a problem.
So, I started to dig deeper, and came across Lessons Learned in Software Testing. The ideas blew my mind. As a result I was paralyzed, not knowing what to do about it. It took me nearly half a year to turn my new knowledge back into action.
“So, what should I do tomorrow?” is a question I would have asked at that time. Today I know more, maybe, but the underlying problem still exists. New testers coming from university, lacking knowledge of software testing due to a lack of courses or interest at the university, get into our profession and are faced with impossible struggles: you can’t automate everything, can’t test everything, can’t assure anything. More often than not, these testers don’t get proper job introductions, don’t get formal classroom training – or maybe get it too late – and need to self-educate to a great degree.
So, instead of paralyzing these testers, there must be something better. Sure, there are a bunch of great books out there, but personally I have started to hate most testing-related books. They are not brief, they don’t tell real-world stories, and translating the concepts and ideas into action is hard. In addition, it’s hard to find out which books to read while the thought-leaders of the testing community keep on fighting about vi vs. emacs in testing.
There are indeed some rays of hope. Matt Heusser, for example, is working on a book titled “Testers at work”. The title alone makes me wallow in great hopes. “Beautiful Testing” edited by Adam Goucher is another one (though I still haven’t read it yet). Instead of arguing with one another all the time, I think the time has come to actually help new people get into the field and master our craft. Interactions with developers, with project managers, with superiors and other testers are all circumstances a new tester will run into. Leading new testers astray at the beginning is a very bad thing to do. How come so many sticky minds are fostered in our profession? Do they just end up as testers because they can’t find another job as a developer, maybe? I think the time has come to change the picture of testers in our industry by actually doing something different and helping others do the same. Leading by example instead of arguing Windows vs. Linux.
What ideas about it do you have in mind?
The Craft of Software Testing
The inaugural issue of the Agile Record magazine includes an article of mine with some thoughts on the craft of software testing. In it I describe my understanding of the craft and how testers may reach a level of professionalism where they know what they’re doing. Deliberate practice is one part of that equation.
Looking through the magazine, I spotted great articles. Dawn Cannan co-authored an article with Lisa Crispin on being the worst and why it pays off, David Evans has an article on testability and considerations on whether to invest in it or not, Lior Friedman has an article on the agile view of quality, Marc Bless has an article on Scrum forced on an organization, etc. Rather than listing all the contents here: go to the Agile Record page, subscribe (it’s free), and get your personal copy.
Adaptation and Improvisation
Dale Emery influenced this blog entry with his paper on Writing maintainable automated tests. In essence, I’m going to compare several test approaches: Unit Testing (i.e. Test-Driven Development), Functional or Acceptance Testing, and Exploratory Testing. I’ll look at the means necessary for regression capability (checking), the costs of testing, and the costs of adapting to changing requirements.
Continue reading Adaptation and Improvisation