The Testing V-Model Catch-22
Posted by Matt Archer on December 8, 2008
I rarely meet a tester who has never heard of the V-Model. It’s one of those parts of the software testing body of knowledge that every tester seems to know about, even if only by name. I’ve added a picture that summarises the model below (Source: Wikipedia).
I’ve seen the V-Model used in two different ways. One approach is to use the V-Model as an extension to a waterfall software development lifecycle, where the “V” is performed just once. The other approach is to use it as an extension to an iterative software development lifecycle, where lots of mini-“V”s are performed, one per iteration.
Regardless of how you apply the V-Model (just once or iteratively), the prescribed sequence of testing levels on the right-hand side of the “V” (Unit Testing -> Integration Testing -> System Testing -> Acceptance Testing) can encourage a project to work in a way that may not be in its best interest. This is easiest to explain if we just look at two of the testing levels, System and Acceptance. I’ve added definitions (taken from the ISTQB Glossary) for each term below.
System Testing: The process of testing an integrated system to verify that it meets specified requirements.
Acceptance Testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
My own interpretation of these terms, and how they relate to the V-Model, is that a group of testers perform system testing to check that the application doesn’t contain any bugs, before one or more users perform acceptance testing to check that they like the application and that it is truly going to help them accomplish one or more tasks more quickly, more accurately, with more confidence, etc.
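To make that distinction concrete, here’s a minimal sketch in Python (the calculate_discount function, its discount rules and both pytest-style tests are invented purely for illustration):

```python
# A hypothetical example to illustrate the two testing levels.
# The discount rules below are invented purely for this sketch.

def calculate_discount(order_total: float, loyalty_years: int) -> float:
    """Return the discount: a 5% base rate plus 1% per loyalty year, capped at 15%."""
    rate = min(0.05 + 0.01 * loyalty_years, 0.15)
    return round(order_total * rate, 2)

# System test: verifies the integrated behaviour against the specified
# requirement ("the discount is capped at 15%"), regardless of whether
# that requirement is what users actually want.
def test_discount_is_capped_at_15_percent():
    assert calculate_discount(100.00, loyalty_years=20) == 15.00

# Acceptance test: expressed in terms of a user's need. A twenty-year
# customer expects that much loyalty to be worth at least 18% off.
# This test deliberately fails under the current cap.
def test_twenty_year_customer_feels_rewarded():
    assert calculate_discount(100.00, loyalty_years=20) >= 18.00
```

Notice that the system test passes while the acceptance test fails: the software meets its specification, but the users don’t accept the specification. That is exactly the situation that sends a team back around the “V”.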
This division of labour always sounds sensible to me until I remember how much time most people spend system testing, how much users love to change their minds, and how changing anything tends to introduce more bugs and the need for more testing. To take a fairly extreme example, a team could write some software, system test it to make sure it’s free of bugs, and get a group of users to acceptance test it, only for those users to ask for 95% of the system to be changed… And then we’re back to square one!
This would, of course, be much more of a tear-jerking moment if you had chosen to follow the “V” only once in a waterfall style than if you were performing many mini-“V”s in an iterative pattern. However, even with many mini-“V”s, the team could still have wasted precious time and money system testing something that will never see the light of day.
So here’s the Catch-22 with the V-Model… Nobody wants to put a buggy piece of software in front of their users, but who wants to spend precious time and money system testing a feature that may be changed or removed?
If you interpret the V-Model as saying that all system testing must be completed before acceptance testing can be started, then the project is likely to run into problems. Either the application will be changed and the testers may feel like their original work was for nothing, or the application will remain unchanged and the users may feel like they didn’t get the solution they wanted. Neither outcome is good.
As with many Catch-22 situations, a compromise can be found if the rules of the game are loosened. The problem stems from trying to complete system testing and gold-plate the application before allowing users to perform acceptance testing. This is never going to work, as the team will always end up changing something (even if it’s just something small) which then needs to be system tested again.
For me, the solution is not to try to finish system testing before acceptance testing begins, but instead to perform as much system testing as necessary (no more, no less) to be confident that the application is of sufficient quality to be acceptance tested by the user(s). Once the application is stable in terms of its features and how those features are realised, the team can then focus on completing any outstanding system testing necessary to be confident that the application is of sufficient quality to release.
This means that a team should get into the habit of thinking about two different thresholds when it comes to an application being fit for purpose: (1) is it fit for acceptance testing, and then (2) is it fit for release?
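One lightweight way a team might encode those two thresholds, assuming a pytest-based suite (the marker names below are my own invention, not an established convention), is to tag each system test with the gate it protects:

```python
import pytest

# Two quality gates encoded as pytest markers. The marker names
# ("fit_for_uat", "fit_for_release") are invented for this sketch and
# would normally be registered in pytest.ini.

@pytest.mark.fit_for_uat
def test_core_purchase_journey_completes():
    # Threshold 1: a show-stopper check that must pass before any user
    # is given the build for acceptance testing. (Body elided.)
    ...

@pytest.mark.fit_for_release
def test_rarely_used_export_format():
    # Threshold 2: a lower-risk check that can safely wait until the
    # features have stabilised after acceptance testing. (Body elided.)
    ...
```

Running `pytest -m fit_for_uat` answers the first question; a full `pytest` run before release answers the second.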