Not every tester should code, but knowing how to spot risky automation can be beneficial
Posted by Matt Archer on February 7, 2013
This post is part of the tips for manual testers working in an agile environment series. A series of posts inspired by the topics covered in the Techniques for Agile Manual Testers course that is currently available to take in London (via the Ministry of Testing) and in Copenhagen (via PrettyGoodTesting).
This tip is about automation. Not because I think every tester should learn how to code, but because it is useful for manual testers to be able to recognise automated tests that may not meet expectations.
The majority of agile teams create different types of automated tests. Some focus on a few lines of code, others encompass chains of integrated systems. Some run in milliseconds, others minutes. Some interact via the user interface (UI), others probe underlying services.
Of these automated tests, the riskiest and most unreliable tend to be those that cover the largest area, take the longest to run and interact with volatile aspects of our software, like the UI. We cannot always avoid these tests, but a team can aim to use them sparingly.
This is where the test automation pyramid [1] can help identify risky strategies by suggesting a ratio between focused, quick, beneath-the-UI tests and broad, slow, against-the-UI tests. Predictably, the ratio is in favour of the focused, quick, beneath-the-UI tests.
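To make the contrast concrete, here is a minimal sketch in Python. The function, element IDs and URL are hypothetical, invented purely for illustration; the point is the difference in scope and speed between the base and the tip of the pyramid.

```python
# Hypothetical business logic under test.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# Base of the pyramid: a focused, millisecond-fast, beneath-the-UI test.
# It exercises one function directly, with no browser or server involved.
def test_apply_discount_unit():
    assert apply_discount(100.0, 20) == 80.0

# Tip of the pyramid: a broad, slow, against-the-UI test would instead
# drive a real browser, e.g. with Selenium (shown as comments only,
# since it needs a running application and browser to execute):
#
#   driver = webdriver.Chrome()
#   driver.get("https://shop.example/basket")
#   driver.find_element(By.ID, "discount-code").send_keys("SAVE20")
#   driver.find_element(By.ID, "apply").click()
#   assert "80.00" in driver.find_element(By.ID, "total").text

test_apply_discount_unit()
```

The pyramid's advice, in short: many tests like the first, few like the second, because the UI test depends on the browser, the network and the page layout all behaving, and any of those can break it without the underlying logic being wrong.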
If your team’s automated tests go against this ratio, I recommend taking a closer look. If what you discover is of questionable value, discuss with your team how manual testing can temporarily help, including any extra support and resources required to fill the gap.
1a. A blog by Martin Fowler about the test automation pyramid: Link
1b. Enter “test automation pyramid” into Google for countless others: Link
If you have a comment or question about this particular tip, please do not hesitate to Leave a Reply. A complete list of tips can be found below.