Exploratory Testing – Scanning

Exploratory Testing is a useful technique. I found a model from my past that helps explain Exploratory Testing in terms of face-detection systems. There are three posts in this series: Learning, Scanning and the Differences. This second entry covers scanning.

Introduction

In the previous entry I compared Exploratory Testing to the learning phase of the face-detection training program we had back in university. The training is an offline step that prepares a detector which can then be used for real-time evaluation of images. The training program generates a file, which can be used to scan images. This entry covers the detection of faces in images. In retrospect I would like to add that there is a difference between face-detection – detecting that an area in an image contains any face at all – and face-recognition – recognizing that this particular face in the image belongs to a particular person, e.g. Markus Gärtner. Face-detection identifies possible faces, so that we can evaluate these distinct portions of the picture to recognize which faces are there – which is the more costly operation.

Scaling

In the actual detection phase the detector is used to scan through an image. As described in the first entry of the series, the detector had multiple levels of classifiers, with a 20×20 pixel window at the lowest level. Since faces in a picture are not guaranteed to be exactly 20×20 pixels in size, the detector is applied to the image at multiple scales. This results in several detection passes. Back in 2003 we played around with VGA images (640×480) and needed to scan them in several scaling steps, each scaled by a factor between 1.25 and 1.5 relative to the previous step.
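To make the scanning step concrete, here is a minimal Python sketch of such a multi-scale sliding-window scan. It is not our original university code: the `detector` callable, the nearest-neighbour resize, the step size of 4 pixels and the use of a numpy array are assumptions made for illustration; only the 20×20 window and the 1.25 scaling factor come from the description above.

```python
import numpy as np

def resize_nearest(image, factor):
    """Nearest-neighbour resize of a 2D array by a scale factor
    (a crude stand-in for the smoothed scaling a real pipeline would use)."""
    h, w = image.shape
    nh, nw = max(1, int(h * factor)), max(1, int(w * factor))
    ys = (np.arange(nh) / factor).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / factor).astype(int).clip(0, w - 1)
    return image[np.ix_(ys, xs)]

def scan_multiscale(image, detector, window=20, scale_factor=1.25, step=4):
    """Slide a fixed-size window over the image at multiple scales.

    `detector` is a callable returning True if the window content looks
    like a face; `image` is a 2D greyscale numpy array."""
    hits = []
    scale = 1.0
    # Keep shrinking the image until the 20x20 window no longer fits.
    while min(image.shape) / scale >= window:
        scaled = resize_nearest(image, 1.0 / scale)
        sh, sw = scaled.shape
        for y in range(0, sh - window + 1, step):
            for x in range(0, sw - window + 1, step):
                if detector(scaled[y:y + window, x:x + window]):
                    # Map the hit back to coordinates in the original image.
                    hits.append((int(x * scale), int(y * scale),
                                 int(window * scale), int(window * scale)))
        scale *= scale_factor
    return hits
```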

In Exploratory Testing we do something similar. We continuously change our scale up and down, fiddling around with the low-level data files of our application, or examining the newest hip feature of our product at a business-facing level. By continuously diving into the different abstraction levels of our models of the software under test, we scan the software for possible problems on multiple levels in order to get a well-rounded picture of our application.

Rating

When detecting faces in an image you usually want a single detection for each face in the image. Applying the detector at multiple scaling factors may, however, result in multiple detections in the same area of the picture. Therefore we introduced a post-processing step for the detected faces. We merged several similar hits into one single area in the image where a face actually was. To do so we needed to find overlapping areas among the recognized hits.
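A minimal sketch of such a merging step might look as follows. The post does not spell out the exact merge criterion we used, so this version assumes hits are grouped when their intersection-over-union exceeds a threshold (`min_overlap=0.3` is an arbitrary illustrative value) and each group is averaged into one rectangle; the hit count returned per group is what the voting step below builds on.

```python
def overlap_ratio(a, b):
    """Intersection-over-union of two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def merge_hits(hits, min_overlap=0.3):
    """Greedily group hits whose overlap exceeds the threshold,
    then average each group into a single rectangle."""
    groups = []
    for hit in hits:
        for group in groups:
            if any(overlap_ratio(hit, other) >= min_overlap for other in group):
                group.append(hit)
                break
        else:
            groups.append([hit])
    merged = []
    for group in groups:
        n = len(group)
        avg_rect = tuple(sum(values) / n for values in zip(*group))
        merged.append((avg_rect, n))  # (merged area, number of raw hits)
    return merged
```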

The bridge to Exploratory Testing here lies in the follow-up testing on identified bugs. By testing around bugs we have already found and around recently fixed areas of our product, we probe the application for new or similar bugs. When we find a bug in Exploratory Testing, we dig deeper into it. We try to find out what the underlying problem is and make the bug report as complete as possible. Occasionally time might constrain our follow-up testing, or we might not be able to test deeper into a bug because the feature we want to examine is blocked by another bug.

Voting

Since our detectors were not guaranteed to find only faces, there was another challenge we had to deal with: from time to time there were so-called false positives in the images – areas without a face that were nonetheless detected as a face by our detectors. Therefore we introduced a voting scheme: only if a detected face was backed by at least five different hits that had previously been merged together did we accept it as a face.
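Building on the hypothetical `merge_hits` sketch above, the voting rule reduces to a simple threshold filter. The five-vote minimum is the value mentioned in the text; everything else is illustrative.

```python
def filter_by_votes(merged_hits, min_votes=5):
    """Keep only merged detections backed by at least `min_votes` raw hits."""
    return [rect for rect, votes in merged_hits if votes >= min_votes]

# Putting the three sketches together:
# raw_hits = scan_multiscale(image, detector)
# faces = filter_by_votes(merge_hits(raw_hits), min_votes=5)
```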

In Exploratory Testing this is why we actually do follow-up testing. If there is a bug, we want to know how severe it is. Like the voting scheme we introduced for face-detection – which is nothing other than a severity detector for the area probably containing a face – follow-up testing gives us hints about the impact of the bug we found. By diving deeper and testing around the variables of the bug, we identify how much of the application is actually blocked and how many business requirements are not fulfilled in the program.

Conclusion

There are several parallels between Exploratory Testing and the detection phase of the face-detection approach we used back in university. In Exploratory Testing we test at multiple scales of our model of the system – just as a face-detector is applied to an image at multiple scale stages. By combining identified bugs when their overlap is relevant, we build a single bug report with a high probability of getting fixed – just as we combined multiple overlapping detections into a single detected face in the image. The severity of the bug is then guided by our observations about that overlap – just as we filtered out false positives based on a user-selectable threshold level in the face-detection system.

So far I have introduced the parallels between Exploratory Testing on the one hand and face-detector training and face-detection on the other. This view is of course a bit simplistic, and I will dive into the major differences between the two in the next entry in this series.
