Rational Functional Tester: Creating Automated Tests
Posted by Matt Archer on October 28, 2008
This post is part 3 of a series of 5 posts about test automation with Rational Functional Tester. I originally created the material as a white paper in 2004 when I was coaching a project that was new to GUI-level automation and also to RFT. I recently discovered it during a clean up and whilst some of my ideas have changed over the past 4 years, I’m happy that most of it still holds up as good advice. Maybe one day I’ll update it.
Prioritise Tests to Automate
An automated testing solution is most effectively developed using an iterative approach. By focusing on automating the tests that are most difficult to execute manually and those that will add the greatest value to the project team, a test team can achieve 80% of the benefits of automated testing with 20% of the effort. Developing iteratively also lets the test team focus on a subset of the tests to be automated and more easily manage the change and quality of the automated testing solution. Review the list of tests to be automated and rank each test based upon how easy it is to automate, how complex it is to execute manually and the quality risk associated with the part of the application it will validate. Record the results of this activity so that they can be refined during later automation development iterations.
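The three ranking criteria can be combined into a simple score. The sketch below is illustrative only: the test names, the 1–5 scales and the equal weighting are all assumptions, not part of the original process, and a real team might weight quality risk more heavily.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: rank candidate tests by combining three 1-5 scores.
public class TestPrioritiser {
    static class Candidate {
        final String name;
        final int easeToAutomate;   // 5 = very easy to automate
        final int manualComplexity; // 5 = very painful to execute by hand
        final int qualityRisk;      // 5 = validates a high-risk part of the app

        Candidate(String name, int ease, int manual, int risk) {
            this.name = name;
            this.easeToAutomate = ease;
            this.manualComplexity = manual;
            this.qualityRisk = risk;
        }

        // Equal weighting is an assumption; adjust to suit the project.
        int score() { return easeToAutomate + manualComplexity + qualityRisk; }
    }

    public static void main(String[] args) {
        List<Candidate> candidates = new ArrayList<>();
        candidates.add(new Candidate("Login", 5, 2, 4));
        candidates.add(new Candidate("MonthEndReport", 2, 5, 5));
        candidates.add(new Candidate("EditPreferences", 4, 1, 1));

        // Highest combined score first.
        candidates.sort(Comparator.<Candidate>comparingInt(Candidate::score).reversed());
        for (Candidate c : candidates) {
            System.out.println(c.name + " -> " + c.score());
        }
    }
}
```

Recording the scores, not just the final order, makes it easy to re-rank in later iterations as the criteria change.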
Select Top <X> Tests to Automate
Review the list of prioritised tests and identify which will be automated during the next automated development iteration. Whilst it would be ideal to automate only those tests that ranked highest on the list, in reality a range of high and low priority tests typically need to be automated during each iteration. This selection is often driven by the dependencies between tests; it may be necessary to automate a low priority test to support the execution of one or more high priority tests. Avoid including too many tests within a single iteration. Clearly record the tests you plan to automate in this iteration so that the test team has a precise definition of the scope and knows when to stop and review.
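The dependency-driven part of this selection can be sketched as a small scope-expansion routine: pick the top-ranked tests, then pull in any prerequisite tests they depend on, even when those prerequisites ranked lower. The test names and dependency map below are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: expand a top-priority selection with its prerequisites.
public class IterationScope {
    public static Set<String> select(List<String> ranked,
                                     Map<String, List<String>> dependsOn,
                                     int topN) {
        Set<String> scope =
            new LinkedHashSet<>(ranked.subList(0, Math.min(topN, ranked.size())));
        Deque<String> toVisit = new ArrayDeque<>(scope);
        while (!toVisit.isEmpty()) {
            String test = toVisit.pop();
            for (String dep : dependsOn.getOrDefault(test, List.of())) {
                if (scope.add(dep)) {
                    toVisit.push(dep); // dependencies may have dependencies
                }
            }
        }
        return scope;
    }

    public static void main(String[] args) {
        List<String> ranked = List.of("PlaceOrder", "CancelOrder", "Login");
        Map<String, List<String>> deps = Map.of(
            "PlaceOrder", List.of("Login"),
            "CancelOrder", List.of("PlaceOrder"));
        // Even with topN = 1, the lower-priority Login test is pulled into scope.
        System.out.println(select(ranked, deps, 1));
    }
}
```

The resulting set is the recorded scope for the iteration; anything outside it waits for a later iteration, however tempting it looks mid-stream.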
Assign Automation Activities
Based upon the agreed scope of the automated development iteration, record which tester will be responsible for analysing, designing and developing each automated test script.
Setup Test Environment
Before each automated test script is created, ensure that the test environment is in a predefined state. Knowing the state of the test environment when the automated test script is created allows the test team to produce more reliable results when using the script to validate future releases of the application. Typically this involves some form of basic resetting of the operating system, restoration of underlying databases to the required state, and the setup of any peripheral devices, such as printers. While some of these tasks can be performed automatically, others typically require human attention.
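One way to keep this repeatable is a pre-recording checklist that runs the steps which can be automated and flags the ones that still need a person. The step names below are illustrative assumptions; the real steps belong to your environment.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: run automated reset steps, then report the manual ones.
public class EnvironmentReset {
    static List<String> runChecklist(Map<String, Runnable> automated,
                                     List<String> manual) {
        List<String> report = new ArrayList<>();
        for (Map.Entry<String, Runnable> step : automated.entrySet()) {
            step.getValue().run(); // execute the automated reset step
            report.add("[done]   " + step.getKey());
        }
        for (String step : manual) {
            report.add("[manual] " + step); // needs human attention before recording
        }
        return report;
    }

    public static void main(String[] args) {
        Map<String, Runnable> automated = new LinkedHashMap<>();
        automated.put("Restore database baseline", () -> { /* e.g. run a restore script */ });
        automated.put("Clear application temp files", () -> { /* e.g. delete work dirs */ });

        List<String> manual = List.of(
            "Confirm the test printer is online",
            "Verify the OS is at the agreed baseline image");

        runChecklist(automated, manual).forEach(System.out::println);
    }
}
```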
Set Recording Options
Before each automated test script is created, ensure that IBM Functional Tester is correctly configured to support both the application under test and its specific domain (such as .NET, Java or Web). If IBM Functional Tester is not configured to work with the specific domain then its ability to interact with the application under test will be greatly reduced.
Record Test Script
The easiest way to initially create the majority of automated test scripts is by using IBM Functional Tester’s record facility. This will automatically add new Test Objects to the Test Object Map and provide a basic script that can later be refined to become more resilient to change and easier to maintain. Using the project’s naming conventions, create a new Automated Test Script and begin performing the steps to reproduce the test. Once complete, add additional comments to the Automated Test Script to increase readability. As a rule of thumb, if you are automating an existing manual test script, each step within the manual test script should become a comment in the automated test script.
Refine Test Script
Many automated test scripts will benefit from being manually refined after they have been initially recorded. This process is performed for a variety of reasons and can be as simple as reformatting the automated test script code or removing unnecessary action precision (such as removing specific coordinates when they are not required). Other refinements may include the introduction of a datapool or a custom recognition mechanism.
Replay Test Script (Against the Same Application)
Regardless of whether an automated test script was refined after recording, it should always be executed against the same version of the application used to initially develop it. Using this approach, a test team can confidently identify any failures or errors produced by the automated test script as issues with the test itself and not the application. If the test passes then no further action needs to be taken. If the test fails then the ‘Debug Script’ activity should be performed.
When an automated test script fails against the release of software used to initially create it, three possible actions can follow. (1) A major issue exists with the automated test script and the easiest way to fix the problem is to start again and re-record the test. (2) A minor issue exists with the automated test script and a small manual refinement will fix the problem. (3) The source of the problem is unknown, or the fix is complex and unclear. Using the test log and the debugging facilities offered by IBM Functional Tester, discover in which category (1, 2 or 3) the problem belongs. Take the necessary action to fix the problem, or if it cannot be easily fixed, perform the ‘Capture Script Issues’ activity.
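The three-way triage can be made explicit so every failure is classified the same way across the team. This is a minimal sketch of that decision table; the category and action names are my own, not RFT terminology.

```java
// Hypothetical sketch: the three-way failure triage described above, as a lookup.
public class ScriptFailureTriage {
    enum Category { MAJOR, MINOR, UNCLEAR }

    static String nextAction(Category category) {
        switch (category) {
            case MAJOR:   return "Discard the script and re-record the test";
            case MINOR:   return "Apply a small manual refinement";
            case UNCLEAR: return "Capture a script issue and move on";
            default:      throw new IllegalArgumentException("Unknown category");
        }
    }

    public static void main(String[] args) {
        // Category is chosen by a tester using the test log and debugger.
        System.out.println(nextAction(Category.MINOR));
    }
}
```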
Capture Script Issues
To keep momentum within a given automated development iteration, if a significant problem is discovered that has no immediate fix or that requires significant custom code to be developed, leave the automated test script in a known condition and record the script issue in a convenient location. As a minimum, each script issue should contain the name of the script(s) that contains the problem, the severity of the issue in terms of how it will affect automating the remainder of the application, a headline that briefly describes the problem, and finally a detailed description of the problem.
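Those minimum fields could be captured in a simple structure, whatever the "convenient location" ends up being (a spreadsheet, a tracker, a log file). The class, severity levels and sample issue below are all illustrative assumptions.

```java
// Hypothetical sketch: the minimum script-issue fields suggested above.
public class ScriptIssue {
    enum Severity { BLOCKS_FURTHER_AUTOMATION, SLOWS_PROGRESS, MINOR }

    final String scriptName;   // script(s) containing the problem
    final Severity severity;   // impact on automating the rest of the application
    final String headline;     // brief description of the problem
    final String description;  // detailed description of the problem

    ScriptIssue(String scriptName, Severity severity,
                String headline, String description) {
        this.scriptName = scriptName;
        this.severity = severity;
        this.headline = headline;
        this.description = description;
    }

    String toLogLine() {
        return severity + " | " + scriptName + " | " + headline;
    }

    public static void main(String[] args) {
        ScriptIssue issue = new ScriptIssue(
            "PlaceOrder_Checkout",
            Severity.SLOWS_PROGRESS,
            "Date picker not recognised",
            "The custom date picker control is not found by the default "
                + "recognition properties and may need custom code.");
        System.out.println(issue.toLogLine());
    }
}
```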