ParkCalc automation – Refactoring a data-driven test

Over the weekend I gave an introduction to ParkCalc automation. Today we will take a closer look at the third test in the provided test examples and see how we can improve it. Before I do this, I will point you to two great articles from Dale Emery. The first is a ten-or-so-page piece in which he walks through a login screen; Uncle Bob showed the same example using FitNesse with Slim. In the second he describes a layered approach to software test automation very well. Together with Gojko’s anatomy of a good acceptance test, this gives us some picture of where we should be heading.

First of all, I have set up a GitHub repository, which you can fork and play around with on your own: ParkCalc on GitHub. The state right after downloading the zip file with the examples can be checked out or downloaded from this snapshot.

Before doing anything we should run the tests to check whether they pass.


./runParkCalc.py parkCalc1 parkCalc2 parkCalc3

This should end up with a screen output similar to this:

==============================================================================
parkCalc1 & parkCalc2 & parkCalc3                                             
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc1                                   
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc1.Calc1 :: A test suite with a ba...
==============================================================================
Basic Test                                                            | PASS |
------------------------------------------------------------------------------
parkCalc1 & parkCalc2 & parkCalc3.parkCalc1.Calc1 :: A test suite ... | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc1                           | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc2                                   
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc2.Calc2 :: A test suite with a ba...
==============================================================================
Basic Test                                                            | PASS |
------------------------------------------------------------------------------
parkCalc1 & parkCalc2 & parkCalc3.parkCalc2.Calc2 :: A test suite ... | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc2                           | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc3                                   
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc3.Calc3 :: A test suite with a ba...
==============================================================================
Basic Test                                                            | PASS |
------------------------------------------------------------------------------
parkCalc1 & parkCalc2 & parkCalc3.parkCalc3.Calc3 :: A test suite ... | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3.parkCalc3                           | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
parkCalc1 & parkCalc2 & parkCalc3                                     | PASS |
3 critical tests, 3 passed, 0 failed
3 tests total, 3 passed, 0 failed
==============================================================================
Output:  /eigene/robotframework/ParkCalc/output.xml
Report:  /eigene/robotframework/ParkCalc/report.html
Log:     /eigene/robotframework/ParkCalc/log.html

Before we start, I would like to give the directories a little bit of love. parkCalc1 is a spike solution, which we should get rid of. It’s checked in, so we can recover it if we ever need it, but we should avoid confusing anyone with it.


git rm -r parkCalc1

Now let’s rename the examples to something more meaningful. parkCalc2 is a keyword-driven test, so we’ll rename it accordingly; parkCalc3 is a data-driven test, and the same applies there.


git mv parkCalc2 keyword-driven
git mv parkCalc3 data-driven

Let’s leave the keyword-driven tests aside, as we’re going to deal with them in our next session. cd’ing into the data-driven directory, we find calc3.txt, which contains the test, and resource.txt, which contains helpful functions. Let’s rename calc3.txt, as the name really does not reveal its intention.


git mv calc3.txt functional.txt

We rename it to functional.txt to acknowledge that we’re dealing with functional tests. Now, having renamed and cleaned up most of the folders, let’s run the tests to see whether they still pass and we didn’t break anything with our renaming:


(cd .. && ./runParkCalc.py keyword-driven data-driven)

Phew. Everything still working. So we may check this in.

Now, let’s take a look into the test, and at the oracle for the parking rates that we’re going to use. Regarding the domain, there are five types of parking: Valet Parking, Short-Term Parking, Long-Term Garage Parking, Long-Term Surface Parking, and Economy Lot Parking. For me this motivates five distinct tests, one per parking type, each with several rows of test data. Since the existing example is based upon Valet Parking, let’s focus on that. I change the test name from Basic Test to Valet Parking Test.


Valet Parking Test
    Park Calc  Valet Parking  05/04/2010  12:00  AM  05/05/2010  12:00  AM  $ 42.00

Now there’s duplication: the test name already expresses that we deal with Valet Parking, and so does the first argument of the test row. So we introduce a new keyword, called Valet Parking, and edit the resource.txt file so that it calls Park Calc with Valet Parking as the lot type:


Valet Parking  [Arguments]  ${entryDate}  ${entryTime}  ${entryAmPm}  ${exitDate}  ${exitTime}  ${exitAmPm}  ${expectedPrice}
    Park Calc   Valet Parking  ${entryDate}  ${entryTime}  ${entryAmPm}  ${exitDate}  ${exitTime}  ${exitAmPm}  ${expectedPrice}

Let’s run the test to see that it’s still passing.


../runParkCalc.py .

Since we touched the resource.txt file, we should have noticed that the Park Calc keyword simply takes eight parameters named arg1 through arg8. To leave the campground a bit cleaner than we found it, let’s give these parameters some intent-revealing names. After seeing that the test still passes, we should commit our changes again.

As I’m skeptical about any provided test, I check back with the oracle to see whether it’s correct. The oracle says that valet parking shall be charged at $18 per day, or $12 for five hours or less. Our initial test expects $42 to be charged for a single day. This seems wrong to me, so I change the expected value to $18, and of course the test should now fail when I run it.


    Valet Parking  05/04/2010  12:00  AM  05/05/2010  12:00  AM  $ 18.00

Now, let’s add a line to check whether five hours or less are charged correctly at $12:


    Valet Parking  05/04/2010  12:00  AM  05/04/2010  05:00  AM  $ 12.00

If you added this line at the bottom, you’ll notice that the test aborts execution when the first line fails. We fix this by prefixing the Page Should Contain keyword in Park Calc with Run Keyword And Continue On Failure:


...
    Run Keyword And Continue On Failure  Page Should Contain  ${expectedPrice}
...

So parking for five hours is calculated correctly, while for a single day there is an error. Let’s add two more test rows: one for less than five hours, and another one for three days:


    Valet Parking  05/04/2010  12:00  AM  05/04/2010  01:00  AM  $ 12.00
    Valet Parking  05/04/2010  12:00  AM  05/07/2010  12:00  AM  $ 54.00

When running the test, the first row passes and the second one doesn’t. I think there is a problem with the rule that charges $18 per day.
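As a side note, the valet rule is simple enough to state as a tiny pricing function. The sketch below is my own illustration in Python (the function name and the “started day” rounding are assumptions of mine, not part of ParkCalc), but it shows where the expected values in the rows above come from:

```python
import math

def valet_parking_cost(hours):
    """Valet parking per the oracle: $12.00 for five hours or less,
    otherwise $18.00 per started day.  The exact boundary handling
    is an assumption; ParkCalc's real code may round differently."""
    if hours <= 5:
        return 12.00
    return 18.00 * math.ceil(hours / 24)

print(valet_parking_cost(1))   # less than five hours
print(valet_parking_cost(24))  # a single day
print(valet_parking_cost(72))  # three days
```

The expected prices $12.00, $18.00 and $54.00 in the test rows all follow from the same two numbers stated by the oracle.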

Before we check this in, let’s introduce setups and teardowns, so that the browser is started just once; restarting it will get more and more annoying as we work through the application. We extract the first few lines of the Park Calc keyword into a keyword of their own in resource.txt:


Open Park Calc Page
    Open Browser  http://adam.goucher.ca/parkcalc/  firefox
    Set Selenium Speed  0
    Title Should Be  Parking Calculator

We then delete the Close Browser statement at the end, and set up the suite with a suite setup and a suite teardown in functional.txt:


Suite Setup     Open Park Calc Page
Suite Teardown  Close Browser

As we’re already working on some improvements, let’s use the ${DELAY} variable for the Selenium speed setup, and extract the page URL to run against into a variable, which can then be changed. Additionally, we should make sure to use the browser named in the ${BROWSER} variable, so that it’s easier to change in case you haven’t got Firefox installed or don’t want your tests to run in it. If the tests still fail only for the same reasons as before, we may check our Valet Parking changes in.

I’ll leave working through the remaining four parking lot types with the data-driven approach as an exercise for you. Here is what I generated from the oracle in the same manner as before, in case you want to peek.

Overall, I’m not very satisfied with the tests so far. I would prefer to express my test data as durations, irrespective of the particular date; this would help the reader dramatically in a case like this. I will not deal with it yet, as it might be a more advanced topic for the future. What I especially don’t like in the current state is the error reporting. I see a report that a particular price should have been on the page, but the price that actually was there is not reported directly on the command line, so I have to open the HTML reports and scan the output there with my own eyes. In the keyword-driven example I have solved this problem already, so we may incorporate a new keyword here as well:


    Run Keyword And Continue On Failure  Calculated Cost Should Be  ${expectedPrice}

Calculated Cost Should Be  [Arguments]  ${expectedCost}
    Click Button  Submit
    ${actualCost} =  Get Text  xpath=//tr[td/div[@class='SubHead'] = 'COST']/td/span/font/b
    Log  Actual costs: ${actualCost}
    Page Should Contain  ${expectedCost}

which is then called from the Park Calc keyword instead of Page Should Contain. This will at least make sure I can read the actual costs from the generated log.html.

[Update]
Now, I was going to leave these data-driven examples as they were, but this morning I noticed how mediocre the tables still were. Based on Dale’s paper, we should extract the dates and give them more meaningful names. Let’s do this by creating a variables section in functional.txt and a variable for one hour of parking:


*** Variables ***

@{FOR_ONE_HOUR}      05/04/2010  12:00  AM  05/04/2010  01:00  AM

Now, after seeing the tests passing, we can replace the hard-coded values with the variable reference one by one.


    Valet Parking  @{FOR_ONE_HOUR}  $ 12.00
...
    Short-Term Parking  @{FOR_ONE_HOUR}  $ 2.00
...
    Long-Term Garage Parking  @{FOR_ONE_HOUR}  $ 2.00
...
    Long-Term Surface Parking  @{FOR_ONE_HOUR}  $ 2.00
...
    Economy Parking  @{FOR_ONE_HOUR}  $ 2.00

This makes the tests a bit more readable. We continue the extraction process until we have extracted all meaningful data into variables of their own. The resulting tree can be found here.
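Incidentally, such variable rows could also be generated from durations, which is the representation I said I’d prefer. The helper below is a hypothetical Python sketch (the function name is mine and it is not part of the repository); it turns an entry timestamp plus a duration into the six fields the ParkCalc form expects:

```python
from datetime import datetime, timedelta

def parking_fields(entry, duration_hours):
    """Return (entry date, entry time, AM/PM, exit date, exit time, AM/PM)
    in the MM/DD/YYYY and 12-hour formats the ParkCalc form uses."""
    exit_ = entry + timedelta(hours=duration_hours)
    def fields(dt):
        return (dt.strftime("%m/%d/%Y"), dt.strftime("%I:%M"), dt.strftime("%p"))
    return fields(entry) + fields(exit_)

# One hour of parking starting at midnight on 05/04/2010 reproduces
# the @{FOR_ONE_HOUR} row: 05/04/2010  12:00  AM  05/04/2010  01:00  AM
print(parking_fields(datetime(2010, 5, 4, 0, 0), 1))
```

Note that `%p` is locale-dependent; in the default C locale it yields the AM/PM markers the form expects.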

That’s it for today. Next time we’ll deal with refactoring the keyword-driven approach.


7 thoughts on “ParkCalc automation – Refactoring a data-driven test”

  1. Great post! I especially like how you explain the reasoning behind every refactoring step. Looking forward to more posts in this series!

    Two comments on how you could possibly still improve your tests:

    1) When creating data-driven tests with Robot Framework, the new test template functionality [1] in RF 2.5 is very useful. This feature allows you to create test cases with just the relevant input/output data without repeating the keyword to use multiple times. Additionally, you don’t need to use `Run Keyword And Continue On Failure` anymore because the continue on failure mode is automatically active when templates are in use.

    2) When running multiple files/directories at once (e.g. `pybot tests.txt more_tests.txt`) the name of the top-level test suite is created by combining the lower-level suite names (e.g. `Tests & More Tests`). If you want a more meaningful name, you can set it yourself with the `--name` option [2]. This feature is often useful with a single test file/directory too; a good example is:

    pybot --variable BROWSER:Firefox --name Tests_with_Firefox web_tests.txt
    pybot --variable BROWSER:IE --name Tests_with_IE web_tests.txt

    [1] http://robotframework.googlecode.com/svn/tags/robotframework-2.5/doc/userguide/RobotFrameworkUserGuide.html#test-templates
    [2] http://robotframework.googlecode.com/svn/tags/robotframework-2.5/doc/userguide/RobotFrameworkUserGuide.html#setting-the-name

    1. Thanks for the kudos and the hints. Dale Emery replied on a mailing list, raising the topic to what I call extract variable until you drop. With Robot Framework the overriding of variables makes this very convenient. I am considering trying out test templates next, though I don’t know when I may find the time to continue this. I also had a comparison with FitNesse/Slim in mind, but this may need some preparation.
