Meeting Principles

Yesterday I came across a list from Esther Derby on how to improve meetings when you’re not in charge. Funnily enough, I had compiled a similar list at our company some time ago, divided into improvement actions for participants and improvement actions for moderators. The list is meant as a motivational collection of optional things I can do to improve a meeting. The basic principle behind it is that I am allowed to do these things, and that if I’m not doing them, I should not complain about ineffective meetings. Here is the list. I appreciate any kind of feedback.

As a participant of a meeting I am allowed to

Preparation

  • … ask for a meeting agenda
  • … prepare for the meeting in a timely manner
  • … decline an invitation
  • … look forward to the meeting

Performance

  • … ask for an introduction of unknown participants
  • … reflect on the agenda and the goal of the meeting
  • … visualize
  • … value contributions of other participants
  • … get clarification on contributions of other participants

Wrap-Up

  • … offer appreciation to the moderator or facilitator
  • … work through the meeting minutes
  • … reflect on the course of the meeting
  • … discuss personal discomfort

As a moderator/facilitator of a meeting I am allowed to

Preparation

  • … provide a meeting agenda
  • … communicate intentions and goals of the meeting
  • … choose participants wisely and personally get in touch with them
  • … prepare myself for the meeting

Performance

  • … feel responsible for the success of the meeting
  • … welcome participants
  • … repeat the goal and the agenda of the meeting and ask for feedback
  • … help meeting participants work effectively and efficiently
  • … remind participants of agreed conversation rules

Wrap-Up

  • … ask participants for feedback on the degree of success
  • … thank everyone for the investment of their time and knowledge
  • … take care to monitor follow-up actions
  • … reflect on personal facilitation and moderation abilities

Some links from the week

Ben Simo explains his view on best practices related to the first two principles of the context-driven school of testing and comes to a fantastic conclusion:

No process should replace human intelligence. Let process guide you when it applies. Don’t let a process make decisions for you.

Seek out continuous improvement. Don’t let process become a rut.

Process is best used as a map; not as an auto-pilot.

Matt Heusser came up with some thoughts regarding failing Agile teams. Indeed, Alistair Cockburn already discussed this topic in more detail some time ago. I found both well worth reading.

Developer-tester, Tester-developer

During this week I watched the following conversation between Robert C. Martin and Michael Bolton on Twitter:

Uncle Bob
@dwhelan: If you’ve enough testers you can afford to automate the functional tests. If you don’t have enough, you can’t afford not to.

Michael Bolton
Actually, it’s “if you have enough programmers you can afford to automate functional tests.” Why should /testers/ do THAT?

Uncle Bob
because testers want to be test writers, not test executers.

Michael Bolton
Testers don’t mind being test executors when it’s not boring, worthless work that machines should do. BUT testers get frustrated when they’re blocked because the some of the programmer’s critical thinking work was left undone.

Uncle Bob
if programmers did all the critical thinking, no testers would be required. testers should specifying at the front and exploring all through; not repeating execution at the end.

Michael Bolton
I’m not suggesting that programmers do all the critical thinking, since programmers don’t all of the project work. I am suggesting, however, that programmers could do more critical thinking about their own work (same for all of us). Testers can help with specification, but I think specification needs to come from a) those who want and b) those who build.

Over the weekend I thought through the reasoning. First of all I truly believe that both are right. Elisabeth Hendrickson stated this in the following way:

In any argument between two clueful people about The Right Way, I usually find that both are right.

Truly, both Uncle Bob and Michael Bolton are clueful people. In the conversation above they seem to be discussing The Right Way, and I believe they are both right – to some degree, depending on the context. Basically this is how I got introduced to the context-driven school of testing.

The clueful thing I realized this morning was about my own personal struggle. Some months ago I struggled with the question whether I am a tester-developer or a developer-tester. Raising this point in my head again, prompted by the conversation between a respected developer and a respected tester, opened my eyes. Three years ago I started as a software tester at my current company. Fresh from university, I was introduced to the testing department, starting with developing automated tests based on shell scripts. Eventually I mastered this and was appointed to a leadership position about one and a half years later. Until that point I thought testing was mostly about stressing the product using some automated scripts. Then I came across “Lessons Learned in Software Testing” and was taught a completely new way to view software testing.

The discussion between these two experts in their fields touched exactly on my personal struggle. Testing is more than just writing automated scripts. Executing the same tests over and over again is a job a student can do. It’s not very thought-provoking and it gets boring. Honestly, I didn’t receive a diploma in computer science (with a major in robotics) to stop thinking at my job. Therefore I got into development topics. Since our product doesn’t lend itself well to exploratory testing, I came up with better test automation. Basically the tools I built for the automated tests also aid in my quest to be a good exploratory tester.

So what was I struggling with? Basically I realized that on my job the programmers get all the kudos. That’s why I started investing time in becoming a better programmer. Meanwhile I found out that it’s also fun to do. You can see the results whenever you run your programs. This is why I never gave up looking at code and dealing with it. Sure, it’s not what Michael Bolton or Jerry Weinberg mean by software testing. On the other hand, it’s what I would like to do.

Basically I consider myself a developer of software test automation. I have a background in software development and I have an understanding of software testing. (I leave it up to my clients and superiors whether I’m a good one or not.) The way I understand my profession, it’s necessary for me to know about both sides. This is also why I now realize that I am a tester-developer, not a developer-tester. As I just realized, my personal struggle comes from not being sure whether I ever really was a software tester. But for sure I have started to become a developer.

To conclude this posting with the initial discussion between the two experts in their fields, there is one thing left to say. As a software tester you may choose to become a developer of test automation. Robert Martin refers to these kinds of testers. On the other hand, as a software tester you may choose to become a tester who applies critical thinking. Michael Bolton refers to these kinds of testers. Whether to choose one path over the other may be up to you.

Misunderstood metrics

My Miagi-Do school mentor Matt Heusser posted a blog entry on metrics today. Since I didn’t quite get the problem with metrics, I needed to contact him to fulfill the Rule of Three Interpretations. In our conversation I realized that he was referring to a concept which I have not experienced in my three years of working as a software tester.

Generally speaking, I was referring to metrics in the sense of using FindBugs, PMD or code coverage tools on your software. For some time we have been using these for the framework that we grew on top of FIT for testing our software product. In combination with Continuous Integration you can see improvements and you can see where your project currently stands. Is it in good shape? Where might there be holes? Which code paths are not tested well enough? This feedback is essential for management to make the right decisions about the risks of delivering the product, based on your test results.
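
For readers who have not seen FIT before, here is what a plain column fixture looks like in FIT’s own Java API (not our in-house framework on top of it; the fixture name and the rating rule are made up for illustration):

    import fit.ColumnFixture;

    // A made-up FIT column fixture. The HTML/wiki table names this class in its
    // first row, input columns are bound to the public fields, and a column
    // labelled "total price()" is checked against the method's return value.
    public class TotalPrice extends ColumnFixture {
        public double baselinePrice;   // input column "baseline price"
        public double taxRate;         // input column "tax rate"

        public double totalPrice() {   // output column "total price()"
            return baselinePrice * (1.0 + taxRate);
        }
    }

Static analysis and coverage tools can be run over fixture and framework code like this just as over production code; those are the kinds of numbers I mean by metrics here.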

On the other hand, Matt refers to metrics on a different level. If your annual reviews and your salary get decided based on management metrics, these metrics are evil. The struggle I have is that I have never worked in such a system, where my personal performance and salary were based on some metric. Still, I can think of situations where this is evil. Here are a few:

  • A software architect getting paid by the number of architectural pages written.
  • A software developer getting paid by the number of lines of code.
  • A software developer getting paid by the number of software products finished.
  • A software tester getting paid by the number of tests executed/automated.
  • A software tester getting paid by the number of bugs found.

Speaking as a software tester paid that way, I would use a tool for generating the test cases that are easy to automate (I just realize I have been down this rabbit hole, but wasn’t getting paid based on it) or use a spell-checker on our logfiles (I always wanted to do this, but didn’t, because there are more severe problems than the correct spelling of some debug log messages). As a colleague likes to put it: When you measure something, you change the thing you’re measuring. Be careful what you measure, because it just might improve. Once I understood these different meanings of metrics, I also understood the problem.
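
To make the gaming concrete, here is a deliberately absurd sketch of how a “number of automated tests” metric could be inflated without adding any real value (all names are hypothetical; this is not the tool I actually used back then):

    import java.io.FileNotFoundException;
    import java.io.PrintWriter;

    // Generates thousands of near-identical, trivially green JUnit tests.
    // The "number of automated tests" metric goes up; the coverage of actual
    // risk does not. Class and file names are made up for this example.
    public class TestCountInflator {
        public static void main(String[] args) throws FileNotFoundException {
            try (PrintWriter out = new PrintWriter("GeneratedValueTests.java")) {
                out.println("import org.junit.Test;");
                out.println("import static org.junit.Assert.assertEquals;");
                out.println();
                out.println("public class GeneratedValueTests {");
                for (int i = 0; i < 10000; i++) {
                    // Each generated "test" checks a fact that cannot fail.
                    out.printf(
                        "    @Test public void value%d() { assertEquals(%d, Integer.parseInt(\"%d\")); }%n",
                        i, i, i);
                }
                out.println("}");
            }
        }
    }

Ten thousand new tests in a second of tool time, and not a single one of them tells anybody anything about the product.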

Some time later I found the origin of the discussion. It is a recent statement from Tom DeMarco reflecting on 40 years of software engineering and measurement. Take the time to read it. Here is the portion I found most interesting:

So, how do you manage a project without controlling it? Well, you manage the people and control the time and money. You say to your team leads, for example, “I have a finish date in mind, and I’m not even going to share it with you. When I come in one day and tell you the project will end in one week, you have to be ready to package up and deliver what you’ve got as the final product. Your job is to go about the project incrementally, adding pieces to the whole in the order of their relative value, and doing integration and documentation and acceptance testing incrementally as you go.”

I believe that this will work. At least it will keep the team from being micro-managed and over-measured.

Mindful readings about Software Craftsmanship

While looking through my personal backlog of blog entries, I found this one today. It cites a quotation from Uncle Bob Martin in one of his blog posts in April. Here is the quote:

I see software developers working together to create a discipline of craftsmanship, professionalism, and quality similar to the way that doctors, lawyers, architects, and many other professionals and artisans have done. I see a future where team velocities increase while development costs decrease because of the steadily increasing skill of the teams. I see a future where large software systems are engineered by relatively small teams of craftsmen, and are configured and customized by business people using DSLs tuned to their needs.

I see a future of Clean Code, Craftsmanship, Professionalism, and an overriding imperative for Code Quality.

The related article was named Crap Code Inevitable? Rumblings from ACCU. Today I remembered that back then I wanted to quote that article as a mindful reading; after reading over it again, that point still stands.

First, the mention of doctors reminds me of a visit to my doctor in May. I had a problem raising my arm after having exercised too much. After I had stated the problem, my doctor told me to stand up, raise my arm this way, raise my arm that way, raise my arm in yet another way, and then he had identified the problem. It was amazing to realize that this way of analyzing a problem is not nearly as efficient in software. On the one hand, it took him no more than five minutes to find the cause. On the other hand, I realized his level of expertise at this. I doubt there was a course back in university where my doctor learned exactly this. Basically I assume he knew how the muscles and fibers are connected with each other, but I clearly doubt that back in his university days there were practical courses where an injured patient with an arm problem like mine was examined and evaluated in front of the students. Likewise, even though I never had a course on test-driven development, I can take the conscious decision to apply it and communicate my intentions to my colleagues. For this to work I take my personal experience with TDD and simply do it. The same applies to acceptance test-driven development. Every day anew I can take the decision to give the best I can in order to delight my colleagues and my customers. Personally I consider this an act of professionalism.

On the other hand, the quote above also reminds me of a problem I just recently read about on Twitter, from Brian Marick:

I detect a certain tendency for craftsmanship to become narcissistic, about the individual’s heroic journey toward mastery. People who think they’re on a hero’s journey tend to disregard the ordinary schmucks around them.

Heroic journeys are a problem. Here I am mostly referring to an insight from Elisabeth Hendrickson and a piece of work which I think was from Alistair Cockburn, though I’m not sure anymore. The problem with our education system is that during school you are the one who fights on your own during exams. At university it’s your own work that gets graded. For PhDs this is even more dramatic (so I have been told; I have no personal experience with this). Then, when you get into your first job, you are asked to do teamwork. But where should you have learned this? The whole value system that worked all your life collapses. So, what do you do about it? People, being “inconsistent creatures of habit”, build walls around themselves to keep their work safe from the rants of others. But – and here comes my reply to Brian’s statement above – the Software Craftsmanship Manifesto says otherwise. Software Craftsmanship is about taking apprentices, teaching what you have learned and what has worked for you, and building a community of professionals for valuable exchange, just as the teams from Obtiva and 8thLight have shown us. This is our responsibility. This is professionalism in the sense of Software Craftsmanship, and it’s among the things we value.

Something new

Today I decided to join it: Twitter. After spending some time playing around with some settings and searching for some interesting people to follow, I am still wondering what makes people curious about it. What made me join then? Today I got a notice on Xing about the XP Days Germany on Twitter. Curiosity killed the cat. Personally I decided to try it out for some time and see what might happen. I included my updates on the right, as you might have noticed.

Testability vs. Wtf’s per minute

Lately two postings popped up in my feed reader regarding testability. While reading through Michael Bolton’s Testability entry, I noticed that his list is a very good one. The problem with testability, from my perspective, is the little attention it seems to get. Over the last week I was inspecting some legacy code. Legacy is meant here in the sense Michael Feathers uses in his book “Working Effectively with Legacy Code”: code without tests. Today I did a code review and was upset about the classes I had to inspect. After just five classes I was completely frustrated and gave up. In the design of the classes I saw large to huge methods entangled with each other, instances of classes being passed around with no clear responsibility assigned to them, variables in places where nobody would look for them, and so on. Since I am currently reading Clean Code from the ObjectMentors, this makes me really upset. Even after ten years of test-driven development there is not only a lack of understanding of this practice, but also a lack of understanding of testability. What is a class worth that talks to three hard-coded classes at construction time? How can one get this beast under test? Dependency injection techniques, design principles and the like were completely absent from these classes. Clearly, this code is not testable – at least 80% of it isn’t, according to the code coverage analysis I ran after adding some basic unit tests where I could. Code lacking testability usually suffers from other problems as well. This is where Michael Bolton, James Bach and Bret Pettichord would turn to heuristics and checklists; the refactoring world calls these smells.
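
To make the testability complaint concrete, here is a minimal sketch (hypothetical class names, not the code I actually reviewed) of a class that hard-codes its collaborators at construction time, next to a variant that accepts them through the constructor so that a test can substitute fakes:

    import java.util.Arrays;
    import java.util.List;

    // Hypothetical concrete collaborators that the legacy class news up directly.
    class ProductionDatabase {
        ProductionDatabase(String host) { /* connects to the real server */ }
        List<String> query(String sql) { return Arrays.asList("row1", "row2"); }
    }

    class SmtpMailSender {
        void send(String to, String body) { /* talks to the real mail server */ }
    }

    // Hard to test: both collaborators are created inside the class itself,
    // so a unit test has no way to substitute fakes for them.
    class HardWiredReportGenerator {
        private final ProductionDatabase db = new ProductionDatabase("prod-db");
        private final SmtpMailSender mail = new SmtpMailSender();

        void sendDailyReport() {
            List<String> rows = db.query("SELECT * FROM orders");
            mail.send("boss@example.com", String.join("\n", rows));
        }
    }

    // Testable variant: the dependencies are expressed as interfaces and
    // injected through the constructor, so a test can pass in an in-memory
    // fake database and a recording mail sender.
    interface Database { List<String> query(String sql); }
    interface MailSender { void send(String to, String body); }

    class ReportGenerator {
        private final Database db;
        private final MailSender mail;

        ReportGenerator(Database db, MailSender mail) {
            this.db = db;
            this.mail = mail;
        }

        void sendDailyReport() {
            List<String> rows = db.query("SELECT * FROM orders");
            mail.send("boss@example.com", String.join("\n", rows));
        }
    }

A unit test can then call new ReportGenerator(fakeDatabase, recordingMailSender).sendDailyReport() and assert on what the fakes recorded, without a real database or mail server anywhere in sight.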

On the Google Testing blog there was an entry regarding a common problem I have also run into several times: Why are we embarrassed to admit that we don’t know how to write tests? In my experience, project managers and developers think that testers know the answers to all the problems hiding in the software. We get asked “Can you test this?” and “When will you be finished?” without a clear understanding of what “tested” means or any insight into what we do most of the time. “Perfect Software – and other illusions about testing” is a book by Jerry Weinberg from last year, which I still need to read through in order to know whether it’s the right book to spread at my company – but I think it is. If a developer doesn’t know about the latest or oldest or most widespread technology, it’s not a problem at all. If a tester does not know how to “test” a piece of code, it is. He’s blocking the project, making it impossible to deliver on schedule – escalation! What Misko points out in his blog entry is that the real problem behind this is, again, testability:

Everyone is in search of some magic test framework, technology, the know-how, which will solve the testing woes. Well I have news for you: there is no such thing. The secret in tests is in writing testable code, not in knowing some magic on testing side. And it certainly is not in some company which will sell you some test automation framework. Let me make this super clear: The secret in testing is in writing testable-code! You need to go after your developers not your test-organization.

I’d like to print this out and hang it all over the place at work.

Overview of Agile Testing

In the just-released July issue of the Software Test and Performance Magazine there is an article by Matt Heusser, my mentor in the Miagi-Do School of Software Testing, and Chris McMahon, introducing the most basic terms surrounding Agile Testing. Before the article was published I was able to provide some feedback on it – apparently enough to get a mention at the end of the article. Basically I’m pleased to have been able to help.

Since I mentioned the Miagi-Do School, I have to make clear that the term school is not meant in the Kuhnian way. James Bach pointed out to me that he uses the term school for the five schools of software testing (Analytic, Standard, Quality, Context-driven and Agile) in the sense of a school of thought. What this means is adopting a mindset. As James pointed out, there are a few circumstances where being driven by the context is counter-productive, and he made me think about three such situations. By the way: can you think of three situations where being driven by the context of the situation is unnecessary?

From what I understand of Matt’s Miagi-Do school, it is not meant in the Kuhnian way either.

Just because you can, doesn’t mean you should

I got the following from a post on design principles related to object-oriented code. You can find the whole enchilada here.

Today I was surprised that – while the principle of single responsibility is rather new in the software world – this principle has been known in the testing world for a long, long time. Why?

On the project I’m currently working on, we got our requirements as a database dump. Our developers decided to generate the configuration – our system under test – using some scripts, extracting it directly from the database and transforming it into the configuration for the software we will deliver.

The test team responsible for test automation of this configuration was then asked to build some automated tests for use in FitNesse. What they did was pretty much the same as our developers: they generated the test cases from the database. This may sound like a reasonable approach.

So, where’s the problem? The problem is that the resulting tests are not aware of the context. What is the context? We need to deliver rating software to a customer in Brazil. The tax system in Brazil differs across the 27 states. We have four major bunches of baselines, each with about 8 to 10 variations in the baseline price (20, 40, …). The resulting configuration consists of about 4 times 10 times 27 (= 1080) different single variations. For each single variation about 20 to 100 test cases are necessary. The generated test suite now consists of a huge number of tests (> 100,000), which run at an overall system level against our delivered system with a test execution time of about 45 seconds. This results in an exhaustive regression test suite which takes 24 hours to execute. Due to the generated payload, FitNesse does not handle these tests in a stable way and crashes occasionally because of the sheer number of test cases in the overall structure.
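
As a back-of-the-envelope illustration, this is essentially what the exhaustive cross product buys you (the numbers are the rough ones from above; the code is hypothetical and not our actual generator):

    // A hypothetical back-of-the-envelope calculation; the numbers are the rough
    // ones described above, the names are made up.
    public class ExhaustiveSuiteSize {
        public static void main(String[] args) {
            int states = 27;              // Brazilian states, each with its own tax rules
            int baselines = 4;            // major bunches of baselines
            int priceVariations = 10;     // variations of the baseline price (20, 40, ...)
            int casesPerVariation = 100;  // upper end of the 20 to 100 cases per variation

            int variations = states * baselines * priceVariations;  // 1,080
            int testCases = variations * casesPerVariation;         // 108,000

            System.out.printf("variations: %d, generated test cases: %d%n",
                    variations, testCases);
        }
    }

Crossing everything with everything is exactly what pushes the suite beyond 100,000 cases.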

So, restating the situation from a different perspective: the configuration is generated, and adapting it to changing requirements takes the developers just a few hours. Executing the test cases, though, is next to impossible. On top of that, the generated tests carry high maintenance costs, resulting in a large amount of rework to keep up with the development team. Delivering feedback quickly to the developers is unlikely to happen. So, what’s the reason for having this huge number of never-executed test cases in the first place?

This is where I get to the topic of this post: just because you can, doesn’t mean you should. Just because it is possible to generate an exhaustive test suite for test automation does not mean you should do it. When the resulting execution times are terrible and the sheer number of tests even causes the test system to crash once in a while, you should refuse to do so.

So what should you do? Clarify your mission for the testing activities. What is the goal of your testing? Do you want to find risky problems quickly? Then find another approach. Do you want to prove that everything is fine, ignoring the approach of the development team? Then you should build an exhaustive test suite – probably. (I would suggest looking for a better approach that gets you feedback in 90 minutes at most.) Know your context. How is the configuration generated? If errors reported by your test suite will be the same for all the different states, then you should rethink your approach. If there is complex logic attached to generating the configuration, then you should probably consider testing that logic and just sending out some tracer bullets from end to end. What you can always do is reconsider your current approach and check for opportunities to improve. The time you might win this way can be spent on follow-up testing and catching up on other activities. Maybe you can spend some more time pairing with your developers on the unit tests?
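
As an illustration of testing the generation logic and sending out only a few tracer bullets, here is a hypothetical sketch (states, baselines and prices are made up) of a reduced selection that trades the full cross product for a handful of end-to-end checks:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Hypothetical sketch: instead of generating the full state x baseline x price
    // cross product, pick a few representative states and only the boundary price
    // variations as end-to-end "tracer bullets". Everything else is covered by
    // focused tests of the generation logic itself. All names are made up.
    public class TracerBulletSelection {

        static class Combination {
            final String state;
            final int baseline;
            final int price;

            Combination(String state, int baseline, int price) {
                this.state = state;
                this.baseline = baseline;
                this.price = price;
            }

            @Override
            public String toString() {
                return state + "/baseline" + baseline + "/price" + price;
            }
        }

        public static void main(String[] args) {
            // A few states chosen to represent the different tax rules.
            List<String> representativeStates = Arrays.asList("SP", "RJ", "AM");
            int[] baselines = {1, 2, 3, 4};
            int[] boundaryPrices = {20, 100};  // lowest and highest variation only

            List<Combination> tracerBullets = new ArrayList<>();
            for (String state : representativeStates) {
                for (int baseline : baselines) {
                    for (int price : boundaryPrices) {
                        tracerBullets.add(new Combination(state, baseline, price));
                    }
                }
            }

            // 3 x 4 x 2 = 24 end-to-end cases instead of tens of thousands.
            System.out.println(tracerBullets.size() + " tracer bullets: " + tracerBullets);
        }
    }

Two dozen end-to-end checks run in a coffee break rather than a day, and the confidence in the remaining combinations has to come from testing the generation logic directly.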