Test Automation Polarities (from my TestBash talk)
Posted by Matt Archer on March 26, 2013
Last week I spoke about balancing test automation techniques at the TestBash conference in Brighton (England). Judging by the tweets that welcomed me as I walked off stage, my primary message was well received. That message was that teams should select and tailor their approaches to test automation so that they match their specific needs (context).
As part of speaking about this topic, one of the slides I included contained a table of what I called ‘test automation polarities’. The table was inspired by James Bach’s list of exploratory testing polarities, which can be found on page 4 here. The way I describe this table is as a “thinking framework” or as “an aid to our intuition” (at least that is what I wrote in my speaker notes!).
This means the table won’t tell you whether the test automation you plan to create (or have already created) is fit for purpose or the worst example of test automation in the world. What it will help you do is think about the type of automation you want to create (if you are still in the pondering stage) and/or pinpoint why your current test automation leaves you with nagging doubts.
You can think of the table as being similar to a test ideas list, but rather than prompting you to ask questions about the software you are testing, it prompts you (and your team members) to ask questions about the automation you plan to create or have already created. As examples, consider: “I want to run these automated tests before every check-in, but I’m not sure they run fast enough.” And, “The automation appears to be outstripping the speed of the software; maybe it should run more slowly.”
Notice that these two questions could drive the automation in opposite directions, depending, of course, on the conclusions that were drawn. By presenting a collection of polar opposites, the table below should also help you identify conflicting test automation objectives. That said, it only explicitly highlights conflicting objectives that lie on the same scale/axis. I will leave you to independently consider the many other (potentially) conflicting objectives that exist between the different rows of the table. Fast, end-to-end test automation anyone!?
| Test automation polarities | |
| --- | --- |
| Checking important scenarios | vs. Checking all scenarios |
| Scripted expected results | vs. Captured expected results |
| Predefined actions | vs. Random actions |
| Code for checking | vs. Code for exploring |
| Run on a trigger | vs. Run on demand |
| Fast | vs. Slow |
| User perspective | vs. Technical perspective |
| Throw away | vs. Continual investment |
| Atomic | vs. End-to-end |
| Recorded | vs. Written |
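To make one of these polarities concrete, here is a minimal sketch of “Predefined actions vs. Random actions” (which also touches on scripted expected results). Everything here is hypothetical and not from the talk: `cart_total` stands in for whatever system you are testing, the predefined check scripts one hand-picked scenario, and the random check generates seeded inputs and verifies a general property instead of a fixed answer.

```python
import random

# Hypothetical system under test: a tiny cart that totals item prices.
def cart_total(prices):
    return round(sum(prices), 2)

# Predefined actions: one hand-picked scenario with a scripted expected result.
def check_predefined():
    assert cart_total([19.99, 5.01]) == 25.00

# Random actions: generated inputs checked against general properties,
# seeded so any failure can be reproduced from the same seed.
def check_random(seed=42, runs=100):
    rng = random.Random(seed)
    for _ in range(runs):
        prices = [round(rng.uniform(0, 100), 2) for _ in range(rng.randint(0, 10))]
        total = cart_total(prices)
        assert total >= 0                          # totals are never negative
        assert total == round(sum(prices), 2)      # totalling matches a reference sum

check_predefined()
check_random()
```

The trade-off the table is pointing at: the predefined check documents one important scenario precisely, while the random check covers many scenarios you did not think of, at the cost of weaker (property-level) expected results.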