Matt Archer's Blog

  • Public Training

    Brighton, 26th March 2014
    Brighton, 27th March 2014

  • New Book

    I'm working on a new book. View the home page for 'Tips for manual testers working in an agile environment'.

Popular Posts

New Book: Tips for manual testers working in an agile environment

A few months ago I wrote a series of blog posts inspired by some of the topics from the Techniques for Agile Manual Testers course that I have been running with the Ministry of Testing and Pretty Good Testing. At the time, I arbitrarily decided to write ten posts, but I always knew there was material for many more. With this in mind, I have decided to write another 40 tips, but this time, rather than publish them on my blog, I plan to use LeanPub to publish all 50 tips as a short book under the title of “Tips for manual testers working in an agile environment”. One of the ideas behind LeanPub is to publish a book gradually over time, which I fully intend to do. If you are interested in reading the book as I write it, check out the landing page for the book on LeanPub and purchase it whilst it is still cheap (currently $1.49) but obviously incomplete (currently 15 tips) [...]

Tips for manual testers working in an agile environment (blog series)

Those of you who are familiar with my blog will know that the majority of my posts tend to be quite lengthy. I like writing this way, but I have decided to break from the norm and publish a series of shorter posts that focus on a question I encourage manual testers working in an agile environment to frequently ask themselves. That question is… How can I provide meaningful, quality-related feedback faster, through predominantly manual, human-driven activities, whilst maintaining independence, diligence and predictability? The question is deliberately process agnostic and designed to encourage fresh ideas and creative thinking rather than point towards a specific set of guiding principles. That said, I regularly find myself making similar recommendations to different teams, and it is these recommendations (or tips) for manual testers working in an agile environment that I plan to share here. [...]

Test Automation Polarities (from my TestBash talk)

Last week I spoke about balancing test automation techniques at the TestBash conference in Brighton (England). Judging by the tweets that welcomed me as I walked off stage, my primary message was well received. That message was that teams should select and tailor their approaches to test automation so that they match their specific needs (context). One of the slides I included on this topic contained a table of what I called ‘test automation polarities’, a table inspired by James Bach’s list of exploratory testing polarities, which can be found on page 4 here. The way I describe this table is as a “thinking framework” or as “an aid to our intuition” (at least that is what I wrote in my speaker notes!). [...]

15 controversial testing tweets from #CAST2012

I wasn’t able to attend the Conference of the Association for Software Testing (CAST), but I did follow the event closely on Twitter (#CAST2012). During the event, a few times each day, I browsed the #CAST2012 Twitter stream and added any tweets that caught my eye to my list of favourites. When the conference ended, I looked at the tweets I had within my list of favourites and something stood out. To many of the Testers, Test Managers and even Heads of Testing that I meet, a large majority of the tweets would be seen as controversial. Not because they are crazy ideas (far from it, at least from my perspective), but because they go against what many people within testing still believe are our “best practices” [...]

Help keep your bug count low by running a bug ‘lucky dip’ after your daily stand-up

I remember being excited as a child when the opportunity of a ‘lucky dip’ presented itself. The game was simple. A selection of prizes would be wrapped in gift paper and hidden in a barrel of foam beads. One at a time, each child would then ‘dip’ their hand into the barrel and pull out a prize. The prize each child received was based solely on luck. The only certainty was that there was a prize in the barrel for everyone. With a bug lucky dip, team members perform a similar ritual; the only differences are that there is rarely a barrel involved and that team members are rewarded with a specific kind of prize: a bug to fix! [...]
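
To make the ritual concrete, here is a minimal sketch, assuming nothing more than a list of open bugs and a list of team members (both invented purely for illustration); after the stand-up, each person ‘dips’ in and pulls out one bug to fix that day.

    import random

    # A minimal sketch of a bug 'lucky dip' run after the daily stand-up.
    # The bug titles and team members below are purely illustrative.
    open_bugs = [
        "Login button misaligned on mobile",
        "Date filter ignores timezone",
        "Export fails for empty reports",
    ]
    team = ["Dev A", "Dev B", "Tester C"]

    # Shuffle the 'barrel' so nobody can pick their favourite bug.
    random.shuffle(open_bugs)

    # Each person pulls out one prize: a bug to fix today.
    for person, bug in zip(team, open_bugs):
        print(f"{person} picked: {bug}")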

Test Case Design with Classification Trees (Sample Book Chapter)

I recently started work on a new book. The title is still to be finalised, but the subject is clear: a practical look at popular test case design techniques. In this modern age of testing, you may be wondering why such a traditional subject needs a new book and whether I would be better off writing about my experiences with testing in an agile environment, test automation or exploratory testing. Without doubt these are print-worthy topics, but I believe that the people best at performing these tasks are those with a solid understanding of test design, and it is for this reason that I wanted to focus on this topic first. [...]
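
As a flavour of the kind of technique involved, here is a minimal sketch of how the leaf classes of a classification tree can be combined into candidate test cases; the tree (a hypothetical search form) and its classifications are invented purely for illustration.

    from itertools import product

    # Illustrative classification tree for a hypothetical search form:
    # each classification (branch) is reduced to its leaf classes.
    classifications = {
        "user type": ["guest", "registered", "admin"],
        "result count": ["none", "one", "many"],
        "sort order": ["relevance", "date"],
    }

    # Full combination coverage: one candidate test case per combination
    # of leaves. Real projects usually prune this list, for example with
    # pairwise coverage or a risk-based selection.
    test_cases = list(product(*classifications.values()))
    for i, case in enumerate(test_cases, start=1):
        print(i, dict(zip(classifications.keys(), case)))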

Why every agile software team should fix bugs as soon as they find them

If a bug is left to fester in the software you are developing, configuring or maintaining, it may camouflage other bugs, demotivate the team by suggesting quality isn’t important, become the topic of pointless conversations, cause duplicate effort, lead to incorrect project metrics, distract the project team, hinder short-notice releases, invalidate estimates and lead to unnecessary frustration. And the longer you leave a bug before fixing it, the more likely these things are to occur, and to a greater extent [...]

Does a Scrum team need an Agile Test Lead / Test Manager?

‘Agile Test Lead’; it sounds like a reasonable title, but it could be interpreted as going against agile principles. ‘Test’ smacks of being role specific. Not the kind of separation you want to encourage if you are aiming for a team of multi-disciplined individuals. And ‘Lead’. It doesn’t sound as bad as ‘Manager’, but it still suggests an element of hierarchy, rather than a self-organising team of peers. So why, when I Google ‘Agile Test Lead’, does it return so many hits (369,000 to be exact)? I haven’t invented the term as part of writing this post, and it’s obviously more pervasive than a few old-school software houses clinging onto their roots whilst also trying to embrace agile principles. So what is an ‘Agile Test Lead’? If I were hiring one, what would I expect them to do, and is it a role that will stand the test of time? [...]

An Example of Risk-Based, Cross-Browser Testing on an Agile Project

In its heyday, risk-based testing was the buzz-word of the industry; every tester was talking about it and every project had it as part of their testing strategy (even if it was only given lip-service in a superficial strategy document somewhere). Today, it is all about being lean and agile – testing early, testing often and having a zero tolerance to defects. It’s a good way of working, but have the guiding principles associated with agile software development made us lose track of one of the most important facts of software testing? That ensuring quality by testing everything is impossible. That even at its best, testing is a risk reduction exercise and different projects have wildly different tolerances to risk. [...]
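
As a taste of how such an approach can be made explicit, here is a minimal sketch of risk-based prioritisation applied to cross-browser testing; the browsers, usage shares and defect-likelihood weights are invented purely for illustration.

    # A minimal sketch of risk-based prioritisation for cross-browser testing.
    # Usage shares and rendering-risk weights are invented for illustration.
    browsers = {
        "Chrome":  {"usage_share": 0.45, "rendering_risk": 1},
        "IE8":     {"usage_share": 0.10, "rendering_risk": 5},
        "Firefox": {"usage_share": 0.25, "rendering_risk": 2},
        "Safari":  {"usage_share": 0.15, "rendering_risk": 2},
    }

    # Risk score = likelihood a user is affected x likelihood of a
    # browser-specific defect. Higher scores get deeper testing.
    ranked = sorted(
        browsers.items(),
        key=lambda item: item[1]["usage_share"] * item[1]["rendering_risk"],
        reverse=True,
    )

    for name, attrs in ranked:
        score = attrs["usage_share"] * attrs["rendering_risk"]
        print(f"{name}: risk score {score:.2f}")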

Sharing Behaviour Driven Development (BDD) specifications between testers and developers using StoryQ

From what I’ve seen, teams that decide to follow an agile way of working quite quickly manage to adopt the frequently cited agile practices, apart from one: the multi-skilled team. As a result, rather than working in a way where anyone can (and is encouraged to) do anything to help the project succeed, many projects still have what they refer to as “developers” and “testers”. Don’t get me wrong, from what I’ve seen this isn’t an agile adoption killer, but it does bring with it some challenges in terms of ownership / responsibility (call it what you will) around some of the newer agile practices where things are still evolving, like Behaviour Driven Development (BDD). [...]
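
StoryQ itself is a .NET library, but the shape of a shared specification is easy to illustrate; here is a minimal, hypothetical sketch in plain Python (not StoryQ) of the given/when/then structure that testers and developers can read, review and own together. The shopping-basket domain is invented for illustration.

    # Not StoryQ itself; a plain-Python sketch of a shared given/when/then
    # specification. The Basket class is a stand-in for real production code.
    class Basket:
        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)

        @property
        def total_items(self):
            return len(self.items)


    def test_adding_an_item_to_an_empty_basket():
        # Given an empty basket
        basket = Basket()
        # When the customer adds an item
        basket.add("book")
        # Then the basket contains exactly one item
        assert basket.total_items == 1

The value is less in the tooling than in the fact that the given/when/then wording is something a tester can draft and a developer can automate, keeping one specification rather than two.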

How test automation with Selenium or Watir can fail

I like automated tests. They’re typically quick to run and, assuming they acquire the correct actual and expected results, they rarely fail to produce an accurate comparison. Don’t get me wrong, I’m no fanatic, but I have seen enough success to know that the right tool, in the right tester’s hand, can be a valuable addition to a project. That said, I can see how test automation can fall out of favour within a project or organisation. Tools have changed significantly over the years; however, for many projects their tool’s ability to do just about anything ironically remains both its greatest strength and its greatest weakness. Even if a project decides to use one of the latest tools for their particular technology, such as Selenium or Watir for GUI-level web testing, it is easy to erode the expected benefits if either tool is used inefficiently (something that is easier to do than you may think). [...]
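
One common example of that inefficiency is synchronisation. Here is a minimal sketch using Selenium’s Python bindings (the URL and locator are invented for illustration) contrasting a brittle fixed pause with an explicit wait for a specific condition.

    import time
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    driver.get("http://example.com/search")

    # Inefficient: a fixed pause that is either too long (a slow suite)
    # or too short (intermittent failures).
    time.sleep(10)

    # Better: wait only as long as needed for a specific condition.
    results = WebDriverWait(driver, timeout=10).until(
        EC.presence_of_element_located((By.ID, "results"))
    )
    print(results.text)

    driver.quit()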

The Testing V-Model Catch-22

I rarely meet a tester who has never heard of the V-Model. It’s one of the parts of the software testing body of knowledge that every tester seems to know about, even if it is just by name. I’ve seen the V-Model used in two different ways. One approach is to use the V-Model as an extension to a waterfall software development lifecycle, where the “V” is performed just once. The other approach is to use the V-Model as an extension to an iterative software development lifecycle, where lots of mini-“V”s are performed, one per iteration. Regardless of how you apply the V-Model (just once or iteratively), the prescribed sequence of testing levels on the right-hand side of the “V” (Unit Testing -> Integration Testing -> System Testing -> Acceptance Testing) can encourage a project to work in a way that may not be in its best interest. [...]

How Much Detail Should I Write In My Test Case / Test Script? (A Thinking Framework for Context-Driven Test Documentation)

As part of continuing to prepare my slides for the Software & Systems Quality Conference, I have dedicated some time recently to thinking about the factors that affect our written communication needs as testers, and consequently the level of detail and information that should be included in our test documentation, specifically our Test Cases. As the title of my talk suggests (“A Thinking Framework for Context-Driven Test Documentation”), I do not believe that there is one perfect level of written detail for a Test Case; far from it. Instead, each project has different written communication needs, priorities and working constraints when it comes to deciding the level of written detail for its documentation. In fact, I’m happy to take this idea further and say that even different Test Cases on the same project may benefit from being written at different levels of detail. I’ll explain why [...]
