Applying the principles of Scrum to testing
Posted by Matt Archer on September 4, 2008
When I first started thinking about my presentation for the Software & Systems Quality Conference, I found myself drawn to creating a huge list of potential testing documents a test team can create (many of which are ‘management’ and ‘planning’ type documents). Whilst I had initially intended to include in my presentation a suggestion of how to avoid falling into the trap of having too much documentation, as I started creating the slides I realised it didn’t belong in the presentation because it didn’t match the abstract.
Hence this post, because one of the ways a test team can reduce their test management documentation and the overhead of test management is to take some of the principles of Scrum and apply them from a testing perspective.
One of the things I like about Scrum is that it’s just the skeleton of a process, which means that whilst it was originally published as a way of managing software development projects, it can easily be scaled or shrunk to manage any creative endeavour.
What this means is that a software development project can be managed using the principles of Scrum, a program of software development projects can be managed using the principles of Scrum, and at the other end of the spectrum, a single discipline within a software development project (such as testing) can be managed using the principles of Scrum.
This leads to an interesting question… Does the entire project team need to commit to using Scrum, or can a single discipline, like testing, break off on its own and follow the principles of Scrum in isolation? I haven’t thought it through for all areas of software development, but certainly for testing, the principles of Scrum can be used regardless of whether the rest of the team (or organisation) wants to work in an agile way or not.
As we know, however, Scrum is just a skeleton, so we need to make some decisions about how we are going to put some testing meat on those Scrum bones. On their website, the Scrum Alliance provide an excellent description of the three artifacts that Scrum recommends, so rather than reinvent the wheel and describe how Scrum works, I’ve quoted snippets from the Scrum Alliance website and then explained how each specific artifact can be used from a testing perspective.
The Scrum Alliance says: At the beginning of the project, the product owner prepares a list of customer requirements prioritized by business value. This list is the Product Backlog, a single list of features prioritized by value delivered to the customer. The Scrum Team contributes to the product backlog by estimating the cost of developing features…
For starters, let’s change the name. We’re using Scrum to manage our testing effort, not build a product, so let’s call it a Testing Backlog rather than a Product Backlog.
The next thing to do is to look deeper into the description. For me, the key terms are customer and value, and to use Scrum to help us manage our testing effort we must decide what customer and value mean from a testing perspective – otherwise how will we know what to include in our Testing Backlog? My preferred option is to view the development team as the customer of testing, as this way of thinking works well for both internal and outsourced testing teams, as well as complementing the most commonly encountered definition of Agile Testing. If you’re working in a small multi-skilled team, this means that you may end up being the customer of yourself, which can be a bit confusing, but it all still works 🙂
That was the easy part – identifying the customer. Now comes the harder part of deciding what we are going to add to our Testing Backlog. The key here is to remember that our Testing Backlog is an outward-facing artifact and therefore should focus on the value the test team will provide. For this reason, things like ‘prepare test cases’, ‘setup test environment’ and ‘write test evaluation summary’ would not be good candidates for inclusion in our Testing Backlog. Instead, ‘prove compatibility with IE7’, ‘compare performance to industry benchmarks’ and ‘validate system against documented requirements’ would be much better, as they focus on the value provided by testing to the rest of the development team.
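As a minimal sketch of what this might look like in practice (the item names you’ve just seen are from the examples above, but the value scores and field names are my own invented illustration, not anything Scrum prescribes), a Testing Backlog can be as little as a single prioritised list:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BacklogItem:
    description: str                 # the value provided, phrased for the customer
    business_value: int              # priority assigned by the customer (higher = more valuable)
    estimate: Optional[int] = None   # effort estimate, added later by the test team

# Value-facing items, mirroring the examples above
testing_backlog = [
    BacklogItem("Prove compatibility with IE7", business_value=8),
    BacklogItem("Compare performance to industry benchmarks", business_value=5),
    BacklogItem("Validate system against documented requirements", business_value=9),
]

# Keep the backlog as a single list, ordered by value delivered to the customer
testing_backlog.sort(key=lambda item: item.business_value, reverse=True)
```

Whether you hold this in a spreadsheet, a card wall or a few lines of code doesn’t really matter; the point is that every entry describes value to the customer, not an internal task.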
Once the Testing Backlog has been populated, the test team can use their specialist knowledge to add high-level estimates. Testing Backlog items that are proving too difficult to estimate or have been given huge effort estimates can of course be split to make life easier. If after splitting a Testing Backlog item it is still proving difficult to estimate, consider amending it to include more specific objectives, evaluation criteria and the test environment to be used. If you are part way through your project and already have some existing test cases (manual or automated) that you have run before, agreeing which ones will be run again and/or the approach used to identify new test cases can also help clarify a Testing Backlog Item.
One final important point to stress is that the Testing Backlog should be updated as more information is discovered about the services required from the test team and as the project progresses. If you created your Testing Backlog at the start of your project and haven’t updated it since, you’re probably missing some important Testing Backlog items that should be added asap.
The Scrum Alliance says: …the Product Backlog’s features are broken down into a Sprint Backlog: a list of the specific development tasks required to implement a feature…
Similar to before, a change in terminology would be useful, so let’s call our Sprint Backlog a Test Cycle Backlog. As you can see from the description, the main difference between a Product Backlog and a Sprint Backlog is that the Product Backlog contains the features to be implemented, whereas the Sprint Backlog contains the detailed tasks required to implement one or more of those features. Another way to look at it is that the Product Backlog contains the what and the Sprint Backlog contains the how.
As our Test Cycle Backlog is an internal-facing document (a task list for the current Test Cycle, if you like) it will contain all of the tasks the test team need to complete to design, execute and provide feedback about one or more of the items in the Testing Backlog. For this reason, in a Test Cycle Backlog I would expect to see tasks like ‘prepare test cases’, ‘setup test environment’ and ‘write test evaluation summary’, as these are detailed tasks the test team will perform.
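Continuing the sketch from earlier (again, the hour figures and field names are invented for illustration), a Test Cycle Backlog entry is a detailed task tied back to the Testing Backlog item it helps deliver:

```python
# A minimal Test Cycle Backlog: each internal task records which outward-facing
# Testing Backlog item it delivers, plus the effort still remaining on it
# (task names come from the examples above; the hours are invented)
test_cycle_backlog = [
    {"task": "Prepare test cases",            "delivers": "Prove compatibility with IE7", "hours_remaining": 6},
    {"task": "Setup test environment",        "delivers": "Prove compatibility with IE7", "hours_remaining": 4},
    {"task": "Write test evaluation summary", "delivers": "Prove compatibility with IE7", "hours_remaining": 2},
]

# Total remaining effort across the Test Cycle - the figure a Burndown Chart tracks
total_remaining = sum(task["hours_remaining"] for task in test_cycle_backlog)
```

Keeping the link back to the Testing Backlog item makes it easy to answer the customer’s question of “what value am I getting from all this work?”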
The Scrum Alliance says: The Burndown Chart shows the cumulative work remaining in a Sprint, day-by-day. … When tasks are completed as the Sprint proceeds, the ScrumMaster recalculates the remaining work to be done and the Sprint Backlog decreases, or burns down over time. If the cumulative Sprint Backlog is zero at the end of the Sprint, the Sprint is successful.
Luckily, the Burndown Chart doesn’t need a new name, as its existing name is generic enough to apply to the majority of scenarios, including visually communicating the progress of the test team during a Test Cycle. When using this technique from a testing perspective you can get some quite skewed results, as you may find items on the Test Cycle Backlog vary in the amount of effort required to complete them. Consequently, your Burndown Chart will tend to ‘jump’, rather than follow a smooth path down to 0%.
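To make the ‘jumping’ concrete, here is a small sketch (the hour figures are invented) showing how unevenly-sized Test Cycle Backlog items produce an uneven burn:

```python
# Hours of work remaining on the Test Cycle Backlog, recorded at the end of
# each day of a Test Cycle (figures invented for illustration)
remaining_by_day = [40, 36, 36, 20, 20, 8, 0]

def daily_burn(remaining):
    """Work completed between consecutive days - the 'steps' on the chart."""
    return [today - tomorrow for today, tomorrow in zip(remaining, remaining[1:])]

# Large items completing all at once cause the jumps: some days burn nothing,
# others burn a big chunk
print(daily_burn(remaining_by_day))  # -> [4, 0, 16, 0, 12, 8]
```

A plot of remaining_by_day looks like a staircase with uneven steps rather than a smooth diagonal, which is exactly the ‘jump’ effect described above – and it’s fine, provided the line still reaches zero by the end of the Test Cycle.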
Combined, these three artifacts can form the basis of a lightweight and simple approach to test management. It’s unlikely to fulfil 100% of everybody’s needs, but that’s kind of the point. It’s a simple place to start and easy to expand upon if you need to.