Matt Archer's Blog


How Much Detail Should I Write In My Test Case?

Posted by Matt Archer on July 24, 2008

As part of continuing to prepare my slides for the UK Software & Systems Quality Conference (to be held in London on the 29th of September, 2008), I have dedicated some time recently to thinking about the factors that affect our written communication needs as testers, and consequently the level of detail and information that should be included in our test documentation, specifically our Test Cases.

As the title of my talk suggests (“A Thinking Framework for Context-Driven Test Documentation”), I do not believe that there is one perfect level of written detail for a Test Case – far from it. Instead, each project has different written communication needs, priorities and working constraints when it comes to deciding the level of written detail for its documentation. In fact, I’m happy to take this idea further and say that even different Test Cases on the same project may benefit from being written at different levels of detail. I’ll explain why…

There are many different factors that affect the level of written detail we should include in our Test Cases. Sadly, there isn’t a magic formula that can take each of these factors and then provide a recommendation for the ideal level of written detail with a sample template and example – wouldn’t that be cool 🙂 . However, whilst no magic formula exists, these factors can still provide excellent food-for-thought and help us consider what is and what isn’t the right level of detail for us, our project, our co-workers and our organisation.

I have grouped the factors into three categories. I’m sure they could be divided differently and presented from a different perspective or different slant, but this feels like a good separation as a starting point.

  1. Factors why we may consider writing a more detailed Test Case.
  2. Factors why we may consider writing a less detailed Test Case.
  3. Factors to help us decide the time frame within which to consider (1) and (2).


1. Factors why we might consider writing a more detailed Test Case

(Evidence of Test Coverage)

Sometimes we need to provide evidence of the Test Cases we run and the aspects of the application we interacted with – I tend to think of this as evidence of our Test Coverage.

There are a few reasons why we may need to provide evidence of our Test Coverage, but typically these reasons are related to regulatory, contractual or governance requirements to report this information to a party outside of our project team. The level of reporting detail can obviously vary, but typically the greater the level of Test Coverage detail required, the more detail we should consider including in our Test Cases. We also need to check that the person who is executing the Test Case is following what is documented, but that’s another story.
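To make this concrete, here is a minimal sketch of how detailed, individually identified Test Cases enable a coverage report for an outside party. All names (TC-01, REQ-1, etc.) are hypothetical, invented purely for illustration:

```python
# Hypothetical traceability data: each detailed Test Case records which
# requirements it covers and whether it has been executed.
test_cases = {
    "TC-01": {"covers": ["REQ-1", "REQ-2"], "executed": True},
    "TC-02": {"covers": ["REQ-3"], "executed": False},
    "TC-03": {"covers": ["REQ-2", "REQ-4"], "executed": True},
}

requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4", "REQ-5"]

# A requirement counts as covered once an executed Test Case touches it.
covered = {req
           for tc in test_cases.values() if tc["executed"]
           for req in tc["covers"]}

for req in requirements:
    status = "covered" if req in covered else "NOT covered"
    print(f"{req}: {status}")

print(f"Coverage: {len(covered)}/{len(requirements)} requirements")
```

Note that this style of reporting only works if the Test Cases are written in enough detail that the "covers" links are defensible to whoever is auditing them.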

(Future Automation)

Automated testing (in the right situation) is great, but sadly, unlike humans, computers can’t read between the lines – they have no knowledge other than what we tell them. This means that if a Manual Test Case is to be converted into an Automated Test Script and executed by a computer, then all interactions with the application we are testing must be clearly documented.

It is at this point that different schools of thought divide, but if you believe that Manual Test Cases make good candidates for automation (say for regression testing purposes) then adding more detail to your Manual Test Case can make it easier to convert into an Automated Test Script in the future.
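As a sketch of why the extra detail helps, here is a hypothetical Manual Test Case written as explicit, structured steps. Because every step names a concrete action, target and value, a simple driver can replay it mechanically; the `FakeApp` class is a stand-in invented to make the example self-contained, not a real tool:

```python
# A Test Case written in enough detail that each step names a concrete
# action, target element and value - the form automation needs.
detailed_test_case = [
    ("type",  "username_field", "TestUser1"),
    ("type",  "password_field", "Testing123"),
    ("click", "login_button",   None),
    ("check", "welcome_banner", "Welcome, TestUser1"),
]

def run(test_case, app):
    """Replay each documented step against the application under test."""
    for action, target, value in test_case:
        if action == "type":
            app.type(target, value)
        elif action == "click":
            app.click(target)
        elif action == "check":
            assert app.read(target) == value, f"{target} did not show {value!r}"

class FakeApp:
    """Stand-in for the real UI, purely to make the sketch runnable."""
    def __init__(self):
        self.fields = {}
    def type(self, target, value):
        self.fields[target] = value
    def click(self, target):
        if target == "login_button":
            user = self.fields.get("username_field", "")
            self.fields["welcome_banner"] = f"Welcome, {user}"
    def read(self, target):
        return self.fields.get(target)

run(detailed_test_case, FakeApp())  # passes silently if every check holds
```

A vaguer instruction like “login and check it worked” carries the same intent for a human, but offers the driver nothing to execute.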

(Necessary Precision)

One of the most common tactics we apply as testers is that of equivalence. We frequently take the “million” different ways that a user can interact with an application and group those interactions into categories that (from a testing perspective) can be considered equivalent. Whilst not always true in practice, this tactic of equivalence allows us to use a smaller set of Test Cases (far fewer than the original “million”), yet still provide a useful assessment of the quality of the application we are testing. What this means is that some Test Cases can be written in a general way (sometimes referred to as an Abstract Test Case) and it is then the task of the person executing the Test Case to create a specific example on-the-fly. The easiest way to explain this is by example…

Imagine the person documenting the Test Case writes, “Check you can login with a valid username and password”. The person who executes this Test Case (who may be the same person who wrote it) then reads this instruction and thinks, “ok, on this occasion, I’ll use the test account TestUser1, with the password Testing123”. This approach of identifying specific examples of Test Cases on-the-fly can be extremely useful, but why in this example was the person who executed the Test Case able to perform a successful test by turning something described in a general way into something specific? The reason is that (in this example) generalising did not take anything away from the test – the purpose was still clear and in theory the same assessment of quality has been made.
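The on-the-fly instantiation above can be sketched in a few lines. The accounts listed are hypothetical; the point is simply that any member of the “valid credentials” equivalence class serves the Abstract Test Case equally well:

```python
import random

# The Abstract Test Case leaves the exact data unspecified...
abstract_test_case = "Check you can login with a valid username and password"

# ...and the executor draws a concrete example from the equivalence class
# of "valid credentials" at execution time (hypothetical accounts).
valid_accounts = [
    ("TestUser1", "Testing123"),
    ("TestUser2", "Passw0rd!"),
    ("TestUser3", "S3cret#99"),
]

username, password = random.choice(valid_accounts)
print(f"Executing: {abstract_test_case!r} using {username}")

# Any choice is acceptable because, for this test's purpose,
# all valid credentials are considered equivalent.
assert (username, password) in valid_accounts
```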

So we now get round to the idea of necessary precision. There are some Test Cases that, if we generalise, will lose the purpose of the test. If it is important that a specific input value, data file, sequence of actions, etc, is used as part of interacting with the application we are testing, I would recommend that this information is included in that Test Case, which in turn typically increases the level of written detail. I refer to this information as necessary precision, as without it the Test Case would not make sense, would lose its true purpose or could be misinterpreted. Be careful, however, to differentiate between necessary precision and unnecessary precision, as it is very rare that all Test Cases need to be written to include this exact information.


(Level of Control)

“Control” is a dirty word in some circles. It conjures thoughts of a military style of management and an environment where creativity is stifled. I wouldn’t necessarily agree that a high level of control stifles creativity on a project. Instead, I prefer to think that a high level of control is, in some instances, a good thing, as it enables a team to avoid some of the problems that can be encountered as a result of decisions being made in isolation and/or decisions being made by someone with insufficient knowledge or experience.

There are a number of reasons why we may choose to manage the creation and/or execution of a Test Case (or a group of Test Cases) with a high level of control. Maybe our project has access to only one skilled tester, but also has access to a group of non-testing resources. In this instance, we may consider giving the control to the skilled tester, who would create the Test Cases and then supervise their execution by the non-testing resources. As another example, maybe we are the testing lead on a project and know that the impact of missing a defect in a particular feature would be unthinkable, so we decide to retain an element of control over the Test Cases that the testers are creating by reviewing them to ensure they complement each other and are in line with the overall approach.

As I’m sure you can imagine, neither of these example scenarios would be possible unless the Test Cases involved had been written to a high level of detail. What this means in general terms is that if somebody wants to have a higher degree of control over the Test Cases that the team creates and/or executes then those Test Cases typically need to be written at a higher level of detail.


(Repeatability)

Whenever I think about repeatable testing, one of the first things that comes to mind is James Bach’s comment from his paper Test Automation Snake Oil.

“Highly repeatable testing can actually minimize the chance of discovering all the important problems, for the same reason that stepping in someone else’s footprints minimizes the chance of being blown up by a land mine.”

Whilst this is true in some circumstances, there will also be a number of occasions when we want to perform a Test Case exactly the same as the last time it was executed, maybe for comparison purposes, for example. Now “exactly” is a dangerous word when it comes to testing, as even the smallest change (time of day, for example) can cause a bug to appear or disappear; however, it is still reasonable to aim for some level of repeatability. If this is our aim then typically, as our repeatability requirement increases, so will the level of written detail in our Test Case. We just need to be careful of James’ Snake Oil if we do!


2. Factors why we might consider writing a less detailed Test Case

(Application Knowledge)

Regardless of whether we are planning to execute a Test Case ourselves or we are writing a Test Case for someone else to execute, typically the greater the knowledge the person executing the Test Case has about the application they are testing, the less detailed the Test Case needs to be. In fact, in my experience, too much detail can distract someone with good application knowledge and slow them down.

If we choose to reduce the level of detail in our Test Case based upon this factor, then the person who executes the Test Case must be prepared to be disciplined. Just because they know the application inside-out and can navigate it in their sleep, it doesn’t mean that they can test it in their sleep too 🙂 . With familiarity can sometimes come complacency, so the person executing the Test Case must be disciplined and use the same eagle-eyed skills on this application as they would on any other.

(Other Documentation)

Whilst a Test Case is actually an abstract thing and could be realised in many different ways and in many different forms, when we write them down we tend to capture them in a single document, separate from anything else. A habit it is then easy to fall into is to write a Test Case as if it is the only piece of documentation the project owns that relates to how the piece of the application we are testing should work. In many circumstances, this isn’t the case. We often have access to requirements documents, support documents, training documents and many more, so the question then becomes: if the information is captured perfectly well in other documentation, why re-write it in our Test Cases?

For me this is not only a poor use of time initially, but also a commitment to a poor use of time in the future, as our Test Cases are likely to need updating when the original source changes and the Test Case becomes out-of-date. For these reasons, I find that other documentation is a factor that should be considered when asking “how can I write a less detailed Test Case?”

As with many of these factors, however, a degree of balance must be struck. Imagine trying to execute a Test Case that said, look here for this, and there for that, and check the results in this location – it is unlikely to be a pleasant or productive experience. With this in mind, whilst a fully normalised set of documentation may be elegant from an academic perspective, it is unlikely to make our lives easier from a practical one.

(Test Analysis Experience)

As a community, we share a collection of Test Analysis techniques. Some of these techniques are better known than others, but once a technique is established (and I use the term established loosely, as this works as soon as just two people know it!) it provides us with a common language and allows us to discuss and present testing ideas in a more succinct way.

Imagine we want to test that a field in an application correctly processes acceptable values and rejects values outside of that acceptable range. Boundary Value Analysis is one of the techniques we could use to approach this kind of testing challenge and, assuming the acceptable range in our example is continuous and is sandwiched between two unacceptable ranges (one on each side), then by applying this technique we would end up with 6 Test Cases.

We could document this example as 6 different Test Cases, but if both the person writing the Test Case and the person executing the Test Case understand the principle of Boundary Value Analysis, we could save ourselves both time and effort by including the technique in the Test Case and writing something like “check that field ABC can correctly process acceptable values and reject values outside of that acceptable range, using the Boundary Value Analysis technique. The acceptable range is 1 to 10 (inclusive).” This allows us to get 6 Test Cases for the price of 1 and provides us with an opportunity to reduce the level of detail in our Test Cases.
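The expansion the executor performs in their head can be written down in a few lines. This sketch applies the standard Boundary Value Analysis pattern (one value either side of each boundary, plus the boundary itself) to the 1-to-10 example; “field ABC” is the post’s hypothetical field, not a real application:

```python
def boundary_values(low, high):
    """Expand the Boundary Value Analysis shorthand for the acceptable
    range [low, high] into the six concrete inputs around its boundaries."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# "The acceptable range is 1 to 10 (inclusive)" expands to six Test Cases:
for value in boundary_values(1, 10):
    expected = "accept" if 1 <= value <= 10 else "reject"
    print(f"input {value:>2} -> field ABC should {expected}")
```

One written Test Case, six executed ones; the shorthand only pays off because both reader and writer share the technique.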

Whilst this factor can be a useful one, it will only work if everyone we plan to share our Test Case with knows the technique. For this reason, it tends not to be as useful when we want to share our Test Cases with people outside the testing group or for UAT purposes, for example.

(Goal Focused)

When it comes to testing, a question we should probably ask ourselves more often is: what does it mean to pass this test? What is it we are trying to check? Whenever I think about this question myself, an idiom that often pops into my head is “a means to an end”. The Cambridge International Dictionary of Idioms defines “a means to an end” as something that you are not interested in but that you do because it will help you to achieve something else. This idiom can apply to someone using a piece of software. They perform a sequence of interactions, but they’re not really that interested in the steps they are performing until right at the end, when the reason for using the software (the goal) is achieved.

As an example, imagine using your Internet banking. You login (“means”), you navigate to your current account (“means”), you select the make-payment option from the menu (“means”), you enter the amount to transfer and the account to transfer it to (“means”), you click the make-payment button (“means”) and then finally an acknowledgement is shown saying the transaction was successful (“goal”). So whenever we create a Test Case, we should ask ourselves: what am I trying to check – the “means” or the “end”?

If we decide we want to focus our test on a specific goal (an “end”) then we can write our Test Case accordingly. This doesn’t mean that we have to remove all mention of any steps that describe the “means”, but it does mean we can use the minimal description necessary to help the person executing the Test Case remember how to prepare or navigate the system before they reach the crunch point of the test (the goal).

Going back to the Internet Banking example, if we decide to focus on the goal then we could document the Test Case as something like, “Check that you can transfer money from one bank account to another”. This being testing, however, we also need to think about the corner cases and how the application handles incorrect actions by the user – I guess we could call these failed goals. My recommendation here is to remain focused on the goal (or failed goal, in this case) and document the Test Case accordingly, for example “check that you cannot transfer money from one bank account to another if the source account does not have sufficient funds”. As you can see, the Test Case is still focused on the goal, not the “means”.
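Both goal-focused Test Cases can be expressed against a toy model of the banking example. The `Account` class and its behaviour are invented for illustration; notice that neither check mentions menus, buttons or navigation, only the goal and the failed goal:

```python
class InsufficientFunds(Exception):
    """Raised when a transfer would overdraw the source account."""

class Account:
    """Toy model of a bank account, just enough for the two checks."""
    def __init__(self, balance):
        self.balance = balance
    def transfer_to(self, other, amount):
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount
        other.balance += amount

# Goal: "check that you can transfer money from one account to another"
source, target = Account(100), Account(0)
source.transfer_to(target, 60)
assert (source.balance, target.balance) == (40, 60)

# Failed goal: "check that you cannot transfer money ... if the source
# account does not have sufficient funds"
poor, target2 = Account(10), Account(0)
try:
    poor.transfer_to(target2, 60)
    raise AssertionError("expected the transfer to be rejected")
except InsufficientFunds:
    pass  # the failed goal is met, and the balances are untouched
assert (poor.balance, target2.balance) == (10, 0)
```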

All of this said, there will be times when we do want to consider the “means” as important as the “end”, such as when we are testing the automation of an existing manual work-flow that needs to be exactly replicated. Just remember that including the “means” as well as the “end” is likely to lead to a Test Case written at a greater level of detail.

(Bug Reporting Experience)

One reason for writing Test Cases to a high level of detail is so that if a bug is found during test execution, a link can be placed in the bug report to the Test Case that helped discover it and then subsequently that same Test Case can be used to check the fix has been made correctly.

This isn’t my favourite argument for more detail. In fact, I’d rather look at it from the perspective that if the person executing the Test Case is an experienced tester, is used to keeping track of their interactions and is capable of writing a good bug report (including a good set of reproduction steps), then this is an argument for less written detail in our Test Case.

(Available Time and Resource)

Writing Test Cases takes time and effort. Time and effort well spent? Well, that depends on the Test Cases we write and the tasks we postpone in favour of writing them. It is important to remember that documenting Test Cases is only one aspect of testing which contributes towards our ultimate goal of providing the most informative assessment of quality, given the time and resources we have available.

The availability of time and effort relates to the level of detail we write in our Test Cases because of the correlation between the amount of detail written in a Test Case and the time it takes to write it. Imagine that you are working on a project and have 20 days to perform the testing. You estimate how long it will take to document your Test Cases based upon your ideal level of written detail and it works out to be 18 days. You now have a decision to make. What is more important? Is a collection of well-written Test Cases more important, or are the tasks being postponed in favour of writing those Test Cases (like test execution, for example) more important? For me, in this example, as a gut feel, 18 days is too much time to dedicate to documentation and, for no other reason than a lack of available time, I would look for ways to reduce the target level of written detail in my Test Cases.

I’ve never seen the sense in dedicating more time to test documentation than you can afford, as for me, an assessment of quality that is steered by some briefly documented (maybe too brief in some places), yet well considered and structured Test Cases is better than a perfect set of documentation that the team is unable to turn into reality because they have run out of time.


3. Factors that will help us decide the time frame of our investment

Whenever we test, our ultimate goal is to provide information about the quality of the application we are testing. We do this by interacting with an application and then reporting anything that seems unusual, suspicious or just down-right broken. Sometimes we document these interactions before we perform them, sometimes we document them after and sometimes we don’t document them at all. If we decide to document our test, this takes time. Time we are not spending interacting with the application we are testing and more than likely time we are not using to improve our understanding of the application’s quality. So when we document Test Cases we are making an investment and this investment should be considered over a period of time. Below are some factors that can help us decide the time frame of our test documentation investment. These should be considered alongside the factors that affect our decisions to write more or less detail in our Test Cases, as discussed above.

(Test Longevity)

Sometimes it is easy to forget that not all Test Cases are born equal. Some are run over and over again, possibly as part of a regression test pack, whereas others are only ever run once, maybe because it would be too expensive or too time consuming to run them again, or maybe because we have the quality information we need and we do not believe it is worth recapturing it in the future.

What this means is that some Test Cases will only ever be executed once compared to others that will be executed multiple times, so in a similar way to other things in life, it can be useful to think of some Test Cases as single-use or disposable and others as life-long tools. These examples are obviously at different ends of the spectrum in terms of the number of uses, but all Test Cases fit on this scale somewhere, and for this reason, considering the lifespan of a Test Case when thinking about the amount of written detail can be useful.

As more detailed Test Cases take longer to write initially and then maintain, this typically means that Test Cases with a short lifespan are written to a lower level of detail so that they make sense from an ROI perspective. This doesn’t necessarily mean that we should write Test Cases that we plan to run several times to a high level of detail, but it does typically mean that we will get a greater return from the time we invest in such a Test Case and consequently have the luxury of considering a bigger investment in its documentation.
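The ROI argument can be made as a back-of-envelope calculation. All the numbers below are illustrative assumptions (not measurements): an up-front writing cost, a small per-run maintenance cost, and a per-run saving from having the detail written down:

```python
def documentation_roi(write_cost_hours, maintain_cost_per_run_hours,
                      runs, saving_per_run_hours):
    """Back-of-envelope: does extra written detail pay for itself over
    the Test Case's lifespan? All inputs are illustrative assumptions."""
    cost = write_cost_hours + maintain_cost_per_run_hours * runs
    benefit = saving_per_run_hours * runs
    return benefit - cost  # positive means the detail paid off

# A single-use Test Case rarely repays a big documentation investment...
print(documentation_roi(4, 0.5, runs=1, saving_per_run_hours=1))

# ...whereas a long-lived regression Test Case may well do.
print(documentation_roi(4, 0.5, runs=30, saving_per_run_hours=1))
```

The crossover point depends entirely on the assumed numbers, which is really the post’s point: lifespan changes the answer, not the formula.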

(Application Longevity)

Different applications have different lifespans too. Some software we write will only ever be used once, maybe to migrate information from one system to another, for example. On the other hand, some systems will last for much longer (we hope) and be used day-in, day-out by an organisation for years.

We obviously have to take into account other factors, like the ones listed above, but if an application has a short lifespan then I believe we should have a good reason to spend a large amount of time documenting detailed Test Cases, compared to if the application we are testing is going to be around for years. From my experience, if the lifespan of the application is short, it won’t just be testers that are looking to save money; instead, everyone on the team will be looking to work in a more lightweight way.

(Application Volatility)

Some application development projects start with a good set of requirements that have been confirmed by the stakeholders and change very little as the project progresses. I can’t remember the last time I was on one of these projects, but I’m sure they exist 🙂 .

At the other end of the spectrum are project teams that are working towards a target product that is changing every day. From my experience, the volatility of an application’s requirements increases due to a number of factors, but two that rate highly on my list are the number of stakeholders involved and the level of innovation required. Of course, if requirements change after they have already been built, this also increases the volatility of the application itself.

From working with other testers over the years, one of the things that I’ve discovered I’m not alone in disliking is maintaining existing Test Cases. We’d happily do anything else if it meant we didn’t have to update existing Test Cases to reflect changes made to the requirements or the application we are testing. It doesn’t mean that we don’t update our Test Cases, it just means we don’t like doing it!

One way to sweeten this task is to reduce the time we spend doing it. We can do this by not documenting our Test Cases to a high level of written detail if we know that there is a good chance that the requirements for the application we are testing are going to change in the foreseeable future.

When we think about changes, it is worth grouping them into categories to help us assess the impact on our Test Cases. Two categories that are useful from a testing perspective are “visual changes” and “functional changes”. Just because an application changes visually does not necessarily mean that it has also changed functionally, and vice versa. This means that if we can identify which aspects of the application are most volatile, we can think about smart ways to reduce the number of times they are mentioned in our Test Cases and reduce our maintenance work accordingly.

Ultimately, however, if the application completely changes then our Test Cases will have to completely change too. For this reason, when considering the life of a Test Case and the amount of time we want to invest in documenting it, we should always consider the volatility of the application and its requirements.


4. Final Thoughts

So to sum things up, the ideal level of written detail for a Test Case depends on many factors, and these factors have different priorities depending on the Test Case we are writing, the project we are working on and the people we are working with.

If I had to provide one tip, it would be to start with a low level of detail because you can always add more later! And later can mean after a few days, or after we have learnt something new about the application, or after the first time we execute the test, or after we find a specific bug. All of that said, my favourite “later” is when “later never comes” 🙂 .

I’m sure most of these points will make their way into the final presentation. I’ll upload the material once it’s finished, but if you want to listen in person, you can register here or visit the main conference homepage here.


6 Responses to “How Much Detail Should I Write In My Test Case?”

  1. LizM said

    Wise words 🙂 The ‘snake oil’ link is particularly useful as someone was recently here talking up automated testing – a case of reckless assumptions I fear 🙂
    Automation is a fine idea and absolutely the right way to go – if only it worked! Well, automation can work, but I’ve got a feeling that it’s of limited benefit when a system is subject to a large amount of change (i.e. early in development). Post development when you move to lower change instability then automated testing is valuable for regression test efficiency.
    We’ll be reading about the snake oil with great interest here ;o)

  2. […] How Much Detail Should I Write In My Test Case? […]

  3. Bruce said

    Hi Matt,

    This is a great analysis. Thanks for posting it.

    I also agree with Liz that automated testing is still very much a holy grail.

    Automation for unit testing works well and can be achieved with relative low cost. In fact it is a good idea to write the unit tests before the actual code.

    Automated testing of user interfaces is slowly getting there. There are a number of tools available to test both native GUIs and as well as web applications. Unfortunately these take almost as much effort to program as the application itself. Changes to the application can invalidate large blocks of automated test scripts very quickly.

    The real problems start when you need to test across multiple integrated systems to validate the end-to-end business processes. Very few tools even allow for testing across multiple UIs and APIs so you end up having to write an interface for the test automation tool as well.

  4. Hi Bruce, Liz, thanks for your comments,

    I know what you mean about GUI level test automation. There are a lot of barriers to adoption and some good reasons why you just wouldn’t consider it in the first place.

    I was reading Brian Marick’s blog the other day about Barriers to acceptance-test driven design, which I think highlights (even with a more agile spin) that GUI (or just-below-GUI) level automation isn’t something to rush into.

  5. Kobi Halperin. said

    Better late than never…
    So for the sake of future readers of this great post, I will add another point in favour of elaborating the test case description:
    Sometimes, you gain knowledge just by writing the details and verifying the feasibility of executing the test case, the required equipment and knowledge for its execution and so on.
    Without this visualisation, some people can’t grasp the whole meaning of the test case while designing & reviewing it.
    So in some cases, one would gain from writing a sample test case per N cases, just to verify their feasibility.

    @halperinko – Kobi Halperin.

  6. zsquare said

    Great article. The level of detail you provide in your test case determines the level of coverage you want to achieve. I would prefer more detail so that, when a bug occurs, it is easy to spot from the test case, instead of it being buried in a long test case that can become confusing.
