
Rational Functional Tester: Using and Maintaining Automated Tests

Posted by Matt Archer on October 28, 2008

This post is part 5 of a series of 5 posts about test automation with Rational Functional Tester. I originally created the material as a white paper in 2004 when I was coaching a project that was new to GUI-level automation and also to RFT. I recently discovered it during a clean-up and whilst some of my ideas have changed over the past 4 years, I’m happy that most of it still holds up as good advice. Maybe one day I’ll update it.

The other posts in this series are (Part 1) Planning Test Automation, (Part 2) Test Automation Architecture, (Part 3) Creating Automated Tests and (Part 4) Reviewing Automated Tests.


Review Release Notes for New Build
Before the automated testing solution is used to verify the quality of the application under test, the test team should review the release notes associated with the specific build or release. These should identify both the changes that have been made to the application and any unfixed defects that were discovered during previous test cycles. With this information to hand, the test team can recognise known defects rediscovered by the automated tests and set them aside rather than investigating them afresh.

Setup Test Environment
Before any automated tests are executed, the test team must reset the test environment to a known state. This may be performed manually or may be automated as part of the automated testing solution.
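As a minimal sketch, an automated reset step might look something like the Java program below, which a project could run before each test cycle. The restore script name, the output directory and the decision to shell out to a batch file are all assumptions for illustration; substitute whatever your own environment requires.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical environment reset: restore a known-good database snapshot
// and clear files left over from the previous test cycle.
public class EnvironmentReset {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Delegate the database restore to an external script (assumed name).
        Process restore = new ProcessBuilder("cmd", "/c", "restore_test_db.bat")
                .redirectErrorStream(true)
                .start();
        if (restore.waitFor() != 0) {
            throw new IllegalStateException("Database restore failed; aborting test run");
        }

        // Remove any output files left behind by the previous run.
        File workDir = new File("C:\\TestData\\output");
        File[] leftovers = workDir.listFiles();
        if (leftovers != null) {
            for (File leftover : leftovers) {
                leftover.delete();
            }
        }
    }
}
```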

Run the Automation
Automated tests can be executed at any time of day or night. If the automated test scripts are to be run overnight, the test team must consider how the execution will be scheduled. Typically, the simplest approach is to utilise the Rational Functional Tester command-line interface together with the scheduler built into the local operating system. Regardless of whether the automated tests are executed manually or scheduled, the test team must consider how the automation will affect other test environment stakeholders, such as other test teams.
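As a sketch of how a scheduled run might be wired up, the Java wrapper below launches a single script through the rational_ft command-line entry point, so the wrapper itself can be registered with the Windows Task Scheduler or cron. The install path, datastore location and script name are assumptions, and the exact flags should be checked against the documentation for your version of the product.

```java
import java.io.IOException;

// Sketch: launch one RFT script via the product's command-line interface.
public class NightlyRun {

    public static void main(String[] args) throws IOException, InterruptedException {
        Process playback = new ProcessBuilder(
                "java",
                "-classpath", "C:\\IBM\\FunctionalTester\\bin\\rational_ft.jar", // assumed install path
                "com.rational.test.ft.rational_ft",         // RFT command-line entry point
                "-datastore", "C:\\Projects\\MyRftProject", // assumed project location
                "-playback", "LoginSmokeTest",              // hypothetical script name
                "-log", "nightly_run")
                .redirectErrorStream(true)
                .start();

        int exitCode = playback.waitFor();
        System.out.println("Playback finished with exit code " + exitCode);
    }
}
```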

Analyse Failures
Once all of the automated test scripts have been executed, the test team must analyse the test log and review any test failures. A test script failure falls into one of three categories. (1) The test environment was not reset correctly and consequently the application under test was not in the expected state. (2) The test script itself contains an error. (3) There is an error in the application, which must be investigated. As the test team becomes more experienced in automated testing, they will appreciate that the more information and failure detail the test log contains, the easier this task is to perform.
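One way to make that categorisation easier is to write extra context into the log from within the scripts themselves. The sketch below uses the logInfo, logWarning and logTestResult methods that RFT scripts inherit from RationalTestScript; the helper methods and the test's subject matter are hypothetical placeholders.

```java
import com.rational.test.ft.script.RationalTestScript;

// Sketch: record enough context in the test log to tell an environment
// problem apart from a script error or a genuine application defect.
public class OrderEntryTest extends RationalTestScript {

    public void testMain(Object[] args) {
        // Note the state the script expects before it starts.
        logInfo("Starting OrderEntryTest against build " + buildLabel());

        if (!environmentLooksReset()) { // hypothetical environment check
            logWarning("Environment not in the expected state; results may be invalid");
        }

        boolean totalCorrect = verifyOrderTotal(); // hypothetical verification
        logTestResult("Order total matches the expected value", totalCorrect);
    }

    // Hypothetical helpers; a real script would read these from the
    // application under test.
    private String buildLabel() { return "BUILD-1234"; }
    private boolean environmentLooksReset() { return true; }
    private boolean verifyOrderTotal() { return true; }
}
```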

Mark for Redevelopment
If a test script fails and the test team identifies the problem to be an error in the automated test script, the test script should be marked for redevelopment but not immediately fixed. Unless fixing an error within a test script is necessary to complete the remainder of the test cycle, any fixes to the automated testing solution should be made within the subsequent automated development iteration, and the application under test should be validated using the original manual test script. Following this approach allows the test team to remain focused on the primary objective of the test cycle (verifying the quality of the application) rather than becoming preoccupied with fixing errors in the test scripts.

Log Defect
If a test script fails and the test team categorises the failure as a failure in the application under test, then a defect log should be entered into the project’s defect tracking database. At a minimum, each defect log should contain a brief headline that summarises the defect, a detailed description that explains how to recreate the defect in the application, and a severity rating that provides an indication as to how important the defect is for the project team to fix.
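As a minimal sketch, the record described above could be captured by something like the class below before being entered into the tracking database. The class and field names are assumptions for illustration, not part of any particular tool’s API.

```java
// Hypothetical defect record holding the minimum fields described above.
public class DefectLog {

    public enum Severity { LOW, MEDIUM, HIGH, CRITICAL }

    private final String headline;    // brief summary of the defect
    private final String description; // how to recreate the defect
    private final Severity severity;  // how important the fix is

    public DefectLog(String headline, String description, Severity severity) {
        this.headline = headline;
        this.description = description;
        this.severity = severity;
    }

    public String toString() {
        return "[" + severity + "] " + headline + "\n" + description;
    }
}
```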

Produce Test Evaluation Summary
At the end of any test cycle, the test team should create a test evaluation summary that describes the quality of the application. Test evaluation summaries will vary from project to project; however, a common set of topics exists. (1) A subjective assessment – a brief description of the perceived application quality. (2) Risks, Issues and Blockers – anything that is preventing the test team from performing the testing effort. (3) Progress against the Plan – the number of tests executed, passed and failed. (4) Defect Analysis – metrics related to defect trends, defect densities and defect ages. (5) Coverage Analysis – how the tests performed relate to project requirements, project risks and application code.
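To illustrate the “Progress against the Plan” figures, the sketch below counts executed, passed and failed tests from a collection of results. The TestResult type is a hypothetical stand-in for whatever your test log actually exposes.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: derive "Progress against the Plan" counts from a result list.
public class EvaluationSummary {

    // Hypothetical stand-in for a parsed test log entry.
    record TestResult(String name, boolean executed, boolean passed) {}

    public static void main(String[] args) {
        List<TestResult> results = Arrays.asList(
                new TestResult("Login", true, true),
                new TestResult("OrderEntry", true, false),
                new TestResult("Reports", false, false)); // not yet executed

        long executed = results.stream().filter(TestResult::executed).count();
        long passed = results.stream()
                .filter(r -> r.executed() && r.passed())
                .count();
        long failed = executed - passed;

        System.out.println("Executed: " + executed
                + ", Passed: " + passed
                + ", Failed: " + failed);
    }
}
```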
