Matt Archer's Blog


Risk-Based, Cross-Browser Testing with Scrum

Posted by Matt Archer on November 8, 2011

Risk-Based Testing

In its heyday, risk-based testing [1] was the buzzword of the industry; every tester was talking about it and every project had it as part of its testing strategy (even if it was only given lip service in a superficial strategy document somewhere).

Today, it is all about being lean [2] and agile [3]: testing early, testing often and having zero tolerance for defects. It’s a good way of working, but have the guiding principles associated with agile software development made us lose track of one of the most important facts of software testing? That ensuring quality by testing everything is impossible; that even at its best, testing is a risk-reduction exercise; and that different projects have wildly different tolerances to risk.

Many of the Scrum teams I join follow a delivery pattern that is made up of a number of internal releases, followed by a final public release. In terms of timeframes, this could mean an internal release every 2 weeks over the course of 4 months (for example) before a final public release at the end of that 4-month period. When it comes to planning the testing effort, this delivery pattern introduces an extra dimension that is rarely discussed on agile projects of this nature. That dimension is time.

In situations such as these, our first option is to ignore the total amount of time available to test (typically multiple sprints’ worth) and pretend that the software produced each sprint will be publicly released. By working in this way we force ourselves to verify every aspect of quality during each sprint and, to the best of our ability, keep the risk of unknown bugs to a minimum.

Our second option is to investigate this extra dimension and postpone testing some aspects of quality until a later sprint, consciously accepting the level of risk associated with that decision.

Your eyes are not deceiving you; I did just hint at an approach to testing on an agile project whereby every aspect of quality isn’t tested every sprint. I’ve had this conversation with people face-to-face, so I can imagine what some of you may be thinking.

“You’re transitioning from a waterfall model and you’re currently going through the ‘w-agile’ phase” [4]

“You’re not taking advantage of test automation properly” [5]

“You need to get testers involved earlier in the lifecycle”

The majority of comments I receive suggest that teams who delay some of their testing until a later sprint are suffering from a process failure that should be fixed by adopting a new way of working. But does every team that delays testing need to be fixed? Or are they already working successfully by deliberately applying a risk-based approach to testing based upon their available time, tools, skills and their project’s tolerance to risk? Have they, quite rightly, adapted their testing to their context [6]?

What follows is an explanation of risk-based browser-compatibility testing on a Scrum project. I must stress that there are countless ways of applying risk-based testing, each with its own subtle twists and nuances. The approach below is one I have found useful in the past; I hope it helps you to create your own unique way of working.

Estimating Risk

At the heart of risk-based testing is the idea that we cannot test everything in every possible way and consequently some things must be tested less thoroughly and/or frequently than others. With this idea in mind, which browsers should be near the top of our testing to-do list and which browsers can be left untested until a later sprint, tested less frequently or tested less thoroughly?

To answer this question, we need to decide how to estimate the level of risk associated with each browser. The majority of techniques used to estimate risk are based upon two parameters. Wikipedia [7] nicely combines these two parameters into the formula below. By introducing this formula, I am able to replace “risk” (a fairly vague and ambiguous concept) with two scales upon which I can more easily rank one browser against another.

Risk = (probability of the accident occurring) x (expected loss in case of the accident)
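To make the formula concrete, here is a minimal worked sketch. The figures are entirely hypothetical (they are not taken from any project mentioned in this post); the point is simply that risk grows with either parameter.

```python
# Minimal illustration of the risk formula above.
# The numbers are hypothetical, chosen only for the example.
def risk(probability_of_bug: float, expected_loss: float) -> float:
    """Risk = probability of the accident occurring x expected loss if it occurs."""
    return probability_of_bug * expected_loss

# e.g. a 20% chance of a compatibility bug escaping into production,
# with an estimated loss of 5,000 (lost sales, support costs, etc.)
print(risk(0.2, 5000))  # 1000.0
```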

When it comes to estimating the “expected loss” caused by accidentally releasing a bug into the wild, I like to use the popularity of a browser. I use this indicative measure because I believe a bug that presents itself in a high-usage browser (such as IE8) is worse than a bug of similar priority that presents itself in a low-usage browser (such as IE5.5). I also like this measure because of the correlation that typically exists between the number of users who enjoy a successful visit to a website and its reason for existing (direct product sales, lead generation or positive brand awareness, for example).
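If you want to turn popularity into the “high”, “medium” and “low” categories used later in this post, something as simple as the sketch below will do. The percentage thresholds are my own assumptions for the sake of the example, not figures from this post; they should reflect the usage statistics of the website being tested.

```python
# Illustrative only: the thresholds are assumptions, not a recommendation.
def potential_impact(market_share_percent: float) -> str:
    """Rough 'expected loss' proxy based on how popular a browser is with this site's users."""
    if market_share_percent >= 20:
        return "High"
    if market_share_percent >= 5:
        return "Medium"
    return "Low"

print(potential_impact(28.0))  # e.g. a dominant browser -> "High"
print(potential_impact(0.1))   # e.g. a near-obsolete browser -> "Low"
```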

Whilst an important indicator, I rarely find a browser’s popularity is enough to prioritise it against its peers, especially when working with multiple browsers with similar usage profiles. As an example, think back a couple of years to when IE6, IE7 and IE8 each held a similar market share (~10%-15% in late 2009). Now imagine that you had been asked to test an AJAX-intensive website with an extremely rich and interactive UI. If you were going to get a free cup of coffee for every P1 browser compatibility bug you found, but only had 30 minutes to test, where would you start? I’d choose IE6 every time, and I don’t know many testers who wouldn’t (other than those who don’t like free coffee!). It is for this reason that I believe the “probability” of a browser compatibility bug existing is also an important part of prioritising the testing of one browser over another.

Unlike “expected loss”, I like to consider a variety of factors when estimating the “probability” of a bug being discovered in a website whilst using a particular browser. Examples include the age of a browser, the default browser used by the team and any similarities between the browser I am currently estimating and the list of browsers already marked as a high priority or already tested. My reasons for these choices are listed below.

I think about the age of a browser because both older browsers and cutting-edge browsers often work in ways that are less frequently considered whilst designing a website.

I think about the default browser(s) used by the team because people will often stumble across (and fix) compatibility issues with their own browser, reducing the chance of finding compatibility issues with that browser during dedicated browser compatibility testing.

I also think about the similarities between the browser I am evaluating and the list of browsers that I have already deemed to be high priority or have already tested in a previous sprint. Why? Because if I have decided to thoroughly test Firefox 3.6.13, I am unlikely to find many additional bugs by testing Firefox 3.6.14, compared with testing a completely different browser, potentially on an entirely different operating system, such as Safari 5 on Mac OS.
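Pulling those factors together, the sketch below shows one way this kind of judgement could be written down as a rough heuristic. The factors mirror the three paragraphs above, but the scoring and the cut-off points are assumptions I have made purely for illustration.

```python
# A rough, illustrative heuristic only; the weightings and cut-offs are
# assumptions made for this example, not a formula from the post.
def probability_of_bug(browser_age_years: float,
                       is_team_default: bool,
                       similar_browser_already_covered: bool) -> str:
    score = 0
    # Very old or cutting-edge browsers behave in ways designers rarely consider.
    if browser_age_years >= 8 or browser_age_years < 1:
        score += 2
    # The team's everyday browser tends to have had its issues stumbled upon already.
    if is_team_default:
        score -= 1
    # Little new is learnt by re-testing a near-identical browser version.
    if similar_browser_already_covered:
        score -= 1
    return "High" if score >= 2 else ("Medium" if score >= 0 else "Low")

print(probability_of_bug(10, False, False))  # e.g. IE6 in 2011 -> "High"
print(probability_of_bug(2, False, True))    # e.g. Firefox 3.6.14 with 3.6.13 covered -> "Low"
```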

Armed with this approach to estimating the potential impact caused by a browser compatibility bug and the probability of such a bug existing, I can rank the different browsers I want to consider for testing. At this stage, my preference is to use “high”, “medium” and “low” categories rather than absolute values. The table below contains example estimates for each browser. I must stress that such a table is specific to a particular project. To highlight this point, consider the range of browsers a website may encounter if it was built for public consumption versus another website built for internal use within a large corporate organisation. Before even starting to prioritise one browser against another, these two projects are likely to have very different lists of browsers that they intend to support.

| Browser | Probability of Discovering a Bug | Potential Impact if a Bug Exists | Risk |
| --- | --- | --- | --- |
| Internet Explorer 8 | Medium | High | High |
| Internet Explorer 7 | High | Medium | High |
| Firefox 6 | Medium | Medium | Medium |
| Internet Explorer 9 | Medium | Medium | Medium |
| Safari 5 | Medium | Medium | Medium |
| Chrome 13 | Low | High | Low |
| Firefox 5 | Low | High | Low |
| Internet Explorer 6 | High | Low | Low |
| Firefox 3.6 | Low | Medium | Very Low |
| Chrome 12 | Low | Medium | Very Low |
| Firefox 4 | Low | Low | Very Low |

I sometimes find it useful to show the same information graphically to help me explain a particular test strategy to senior management or other members of the team. The table above is represented graphically below. I like this style of presentation because of the colours: if you have one or more browsers in the red or amber area, then this is probably where you should turn your thoughts first. Browsers in the green area can then follow, as desired.

Risk-Based Browser-Compatibility Matrix
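For anyone who prefers code to a chart, the sketch below shows one possible way of combining the two scales into the “Risk” column of the table above. The 1-3 scoring and the thresholds are my own reverse-engineering of that example table, not a rule from this post; tweak them to suit your own project.

```python
# One possible mapping from the two ordinal scales to a risk category.
# The 1-3 encoding and thresholds are assumptions that happen to reproduce
# the example table above; they are not the only sensible choice.
SCORE = {"Low": 1, "Medium": 2, "High": 3}

def risk_category(probability: str, impact: str) -> str:
    product = SCORE[probability] * SCORE[impact]
    if product >= 6:
        return "High"
    if product >= 4:
        return "Medium"
    if product >= 3:
        return "Low"
    return "Very Low"

browsers = {
    "Internet Explorer 8": ("Medium", "High"),
    "Internet Explorer 7": ("High", "Medium"),
    "Internet Explorer 6": ("High", "Low"),
    "Chrome 13": ("Low", "High"),
    "Firefox 3.6": ("Low", "Medium"),
}

for name, (probability, impact) in browsers.items():
    print(f"{name}: {risk_category(probability, impact)}")
```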

Sprint Planning and Release Planning

Now comes the tricky part. Based upon their associated risk, how often should each browser be tested and to what level of thoroughness? As I suggested at the beginning of this post, if a website will not be publicly released until several sprints into the future, we have a greater number of options available.

At one end of the spectrum are projects that set aside resources every sprint to test every browser for incompatibility bugs, ranging from the blatantly obvious to single-pixel misalignments. This is an extremely risk-averse, but also very costly, approach to understanding what browser compatibility bugs exist. Someone involved with a project like this is likely to have high expectations when it comes to browser compatibility.

At the other end of the spectrum are projects that do not perform any browser compatibility testing until the final sprint before a public release, at which point their testing is limited to traversing one or two paths using the top three browsers, whilst scanning for obvious visual anomalies. These projects are likely to have a high tolerance to risk (whether they have consciously chosen to accept it is another matter).

Most of the teams I work with find themselves somewhere in the middle, but typically not before a healthy series of arguments and counter-arguments takes place in favour of different testing profiles, each with its own level of risk. Below is an example of how I like to represent this information as a consensus is thrashed out within the team. In my experience, low-tech works best, so grab a whiteboard and some markers and talk through the various options that are compatible with your available time, tools, skills and your project’s tolerance to risk.

Testing Profile for Browser Compatibility Testing Across Multiple Sprints (Scrum)
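If it helps to capture the outcome of that whiteboard discussion somewhere more durable, a profile like the one pictured above can also be recorded as simple data. The sketch below is entirely made up; the browsers, sprint allocations and depth labels are illustrative assumptions, not a recommendation.

```python
# A made-up example of a cross-sprint browser-testing profile. The sprint
# allocations and "depth" labels are illustrative only; the real profile
# should come out of the team discussion described above.
testing_profile = {
    # browser: {sprint number: depth of testing in that sprint}
    "Internet Explorer 8": {1: "smoke", 2: "thorough", 3: "thorough", 4: "thorough"},
    "Internet Explorer 7": {2: "thorough", 4: "thorough"},
    "Firefox 6":           {3: "smoke", 4: "thorough"},
    "Safari 5":            {4: "smoke"},
    "Internet Explorer 6": {4: "smoke"},
}

def browsers_for_sprint(sprint: int) -> dict:
    """Which browsers get compatibility testing in a given sprint, and how deeply."""
    return {browser: plan[sprint] for browser, plan in testing_profile.items() if sprint in plan}

print(browsers_for_sprint(4))
```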

Finally, remember that whilst this kind of testing profile deliberately spans multiple sprints, it is not intended to be set in stone during sprint 1 and religiously adhered to until the website is externally released. Truth be told, your original estimates are likely to be wrong, so be prepared to tweak the priority associated with each browser from one sprint to the next.

References

1 – An article written by James Bach on risk-based testing.

http://www.satisfice.com/articles/hrbt.pdf

2 – A blog post by Alan Richardson (a.k.a. “the Evil Tester”) in which he describes how to apply one of the principles of lean (avoiding waste) to testing.

http://www.eviltester.com/index.php/2008/02/20/some-notes-on-software-testing-waste-my-lean-perspective/

3 – A good overview of agile testing by Elisabeth Hendrickson.

http://testobsessed.com/wp-content/uploads/2011/04/AgileTestingOverview.pdf

4a – A blog post by Rachel Davies in which she describes some of the signs that indicate a team could be following ‘w-agile’.

http://agilecoach.typepad.com/agile-coaching/2010/03/agile-or-wagile.html

4b – Another blog post about ‘w-agile’, this time by Rebecca Porterfield on the ThoughtWorks community site. In this post Rebecca takes the position that there is both ‘good’ and ‘bad’ w-agile, a belief I share based upon my own agile transformation experiences.

http://community.thoughtworks.com/posts/ffc85995c8

5 – A blog post by Mike Cohn in which he describes an approach to test automation that is frequently used on agile projects. Notice that, similar to other test automation models proposed for agile projects, the focus is on ‘functional’ test automation. How are the non-functional aspects tested? If ‘manually’ is the answer, this is where risk-based testing can help.

http://blog.mountaingoatsoftware.com/the-forgotten-layer-of-the-test-automation-pyramid.

6 – An introduction to context-driven testing, at the heart of which is the principle that there are no testing best practices.

http://www.context-driven-testing.com

7 – The Wikipedia page for ‘Risk’.

http://en.wikipedia.org/wiki/Risk#Mathematical_formulations
