Managing regression risk with evolving manual and automated test cases
In an Agile project, as each iteration completes, the product grows. Therefore, the scope of testing also increases.
If the full set of tests for previous increments is repeated every time the product grows, the testing effort compounds: the new tests needed for each increment grow roughly linearly, one increment at a time, while the cumulative testing required for the whole product grows ever larger with each iteration.
Along with testing the code changes made in the current iteration, testers also need to verify no regression has been introduced on features that were developed and tested in previous iterations.
The risk of introducing regression in Agile development is high due to extensive code churn (lines of code added, modified, or deleted from one version to another).
Since responding to change is a key Agile principle, changes can also be made to previously delivered features to meet business needs.
To maintain velocity without incurring a large amount of technical debt, it is critical that teams invest in test automation at all test levels as early as possible.
It is also critical that all test assets, such as automated tests, manual test cases, test data, and other testing artifacts, are kept up to date with each iteration. It is highly recommended that all test assets be maintained in a configuration management tool. This enables version control, ensures ease of access by all team members, and supports making the changes required by evolving functionality while still preserving the historical record of the test assets.
Because complete repetition of all tests is seldom possible, especially in tight-timeline Agile projects, testers need to allocate time in each iteration to review manual and automated test cases from previous and current iterations. The goals of this review are to select test cases that may be candidates for the regression test suite and to retire test cases that are no longer relevant.
Tests written in earlier iterations to verify specific features may have little value in later iterations due to feature changes or new features which alter the way those earlier features behave.
Deciding which tests to automate
While reviewing test cases, testers should consider suitability for automation.
The team should automate as many tests as possible from previous and current iterations. Automated regression tests then reduce regression risk with less effort than manual regression testing would require, freeing testers to test new features and functions more thoroughly in the current iteration.
Test design and maintenance
It is critical that testers can quickly identify and update test cases from previous iterations and/or releases that are affected by the changes made in the current iteration.
Defining how the team designs, writes, and stores test cases should occur during release planning.
Good practices for test design and implementation need to be adopted early and applied consistently. The shorter timeframes for testing and the constant change in each iteration will increase the impact of poor test design and implementation practices.
Automation at all test levels
Use of test automation, at all test levels, allows Agile teams to provide rapid feedback on product quality. Well-written automated tests provide a living document of system functionality.
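As a sketch of what "living documentation" can look like in practice, the following hypothetical example (the `ShoppingCart` class and its behaviors are invented for illustration) names each automated test after the behavior it verifies, so the suite reads as a specification of the feature:

```python
# Hypothetical feature under test: a simple shopping cart.
class ShoppingCart:
    def __init__(self):
        self._items = {}

    def add(self, sku, qty=1):
        if qty < 1:
            raise ValueError("quantity must be at least 1")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_items(self):
        return sum(self._items.values())


# Each test name documents one behavior of the system, so reading the
# suite tells a new team member how the feature is expected to work.
def test_adding_an_item_increases_the_item_count():
    cart = ShoppingCart()
    cart.add("SKU-1")
    assert cart.total_items() == 1


def test_adding_the_same_item_twice_accumulates_quantity():
    cart = ShoppingCart()
    cart.add("SKU-1")
    cart.add("SKU-1", qty=2)
    assert cart.total_items() == 3
```

Because the tests run with every build, this "document" cannot silently drift out of date the way a written specification can.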
Automation and configuration management
By checking the automated tests and their corresponding test results into the configuration management system, aligned with the versioning of the product builds, Agile teams can review the functionality tested and the test results for any given build at any given point in time.
Automated unit tests
Automated unit tests are run before source code is checked into the mainline of the configuration management system to ensure the code changes do not break the software build.
To reduce build breaks, which can slow down the progress of the whole team, code should not be checked in unless all automated unit tests pass. Automated unit test results provide immediate feedback on code and build quality, but not on product quality.
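One way to enforce this rule is a small gate script invoked before check-in (for example, from a Git pre-commit hook). The sketch below is illustrative: the test command passed in is an assumption about the project's layout, not a prescribed one.

```python
import subprocess
import sys


def unit_tests_pass(command):
    """Run the project's unit-test command; return True only if it exits 0.

    `command` is whatever runs the unit suite, e.g.
    [sys.executable, "-m", "pytest", "tests/unit"] (an assumed layout).
    """
    result = subprocess.run(command)
    return result.returncode == 0


def gate_check_in(command):
    """Pre-check-in gate: block the commit when any unit test fails."""
    if not unit_tests_pass(command):
        print("Unit tests failed -- check-in blocked to avoid breaking the build.")
        return 1
    print("All unit tests passed -- check-in allowed.")
    return 0
```

Wired into a pre-commit hook that calls `sys.exit(gate_check_in(...))`, this stops a failing change before it reaches the mainline and slows down the whole team.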
Automation and continuous integration
Automated acceptance tests are run regularly as part of the continuous integration full system build.
These tests are run against a complete system build at least daily, but are generally not run with each code check-in as they take longer to run than automated unit tests and could slow down code check-ins.
The test results from automated acceptance tests provide feedback on product quality with respect to regression since the last build, but they do not provide status of overall product quality.
Automated tests can be run continuously against the system.
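To run fast unit tests on every check-in while reserving slower acceptance tests for the daily full build, teams commonly tag tests and select by tag (real frameworks such as pytest support this via markers). The minimal registry below is a hypothetical sketch of the idea, not a real framework:

```python
# A minimal sketch of tag-based test selection: "unit" tests run on every
# check-in; "acceptance" tests run only in the daily full CI build.
REGISTRY = []


def test(*tags):
    """Decorator that registers a test function under one or more tags."""
    def register(fn):
        REGISTRY.append((fn, set(tags)))
        return fn
    return register


@test("unit")
def test_price_rounding():
    assert round(19.999, 2) == 20.0


@test("acceptance")
def test_end_to_end_checkout():
    # Would drive the deployed system end to end; kept trivial here.
    assert True


def run(tag):
    """Run only the registered tests carrying the given tag; return the count."""
    ran = 0
    for fn, tags in REGISTRY:
        if tag in tags:
            fn()
            ran += 1
    return ran
```

The per-check-in job would call `run("unit")`, while the daily continuous integration build would run both tags.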
Build verification tests
An initial subset of automated tests covering critical system functionality and integration points should be created early; these tests are then run immediately after a new build is deployed into the test environment.
These tests are commonly known as build verification tests. Their results provide instant feedback on the software after deployment, so the team does not waste time testing an unstable build.
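A build verification run can be modeled as a short, ordered list of critical checks that stops at the first failure, since one failed smoke check is enough to declare the build unstable. The check names and the `verify_build` helper below are illustrative assumptions, not a standard API:

```python
def verify_build(checks):
    """Run build verification (smoke) checks in order.

    `checks` maps a check name to a zero-argument callable that raises on
    failure. Returns (ok, failed_name), stopping at the first failure so
    the team gets instant feedback before deeper testing starts.
    """
    for name, check in checks.items():
        try:
            check()
        except Exception:
            return False, name
    return True, None


# Illustrative critical checks for a hypothetical web application; each
# lambda stands in for a real probe of the deployed build.
smoke_checks = {
    "application starts": lambda: None,   # e.g. the process is up
    "login page responds": lambda: None,  # e.g. HTTP 200 on /login
    "database reachable": lambda: None,   # e.g. SELECT 1 succeeds
}
```

If `verify_build` reports a failure, the build is rejected and no further test effort is spent on it.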
Automated regression tests
Automated tests contained in the regression test set are generally run as part of the daily main build in the continuous integration environment, and again when a new build is deployed into the test environment.
As soon as an automated regression test fails, the team stops and investigates the reasons for the failing test. The test may have failed due to legitimate functional changes in the current iteration, in which case the test and/or user story may need to be updated to reflect the new acceptance criteria.
Alternatively, the test may need to be retired if another test has been built to cover the changes.
However, if the test failed due to a defect, it is a good practice for the team to fix the defect prior to progressing with new features.
In addition to test automation, the following testing tasks may also be automated:
- Test data generation
- Loading test data into systems
- Deployment of builds into the test environments
- Restoration of a test environment to a baseline
- Comparison of data outputs
Automation of these tasks reduces the overhead and allows the team to spend time developing and testing new features.
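Two of the tasks above, test data generation and comparison of data outputs, can be sketched briefly. The record fields and CSV layout below are invented for illustration; the point is that seeded generation makes the data reproducible, and a simple diff pinpoints where two exports disagree:

```python
import csv
import io
import random


def generate_customers(n, seed=0):
    """Generate deterministic synthetic test data (illustrative fields)."""
    rng = random.Random(seed)
    return [
        {"id": i, "name": f"customer-{i}", "credit": rng.randint(0, 1000)}
        for i in range(n)
    ]


def to_csv(rows):
    """Serialize rows the way a system under test might export them."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name", "credit"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()


def compare_outputs(expected, actual):
    """Return the line numbers on which two data exports differ."""
    diffs = []
    for lineno, (e, a) in enumerate(
        zip(expected.splitlines(), actual.splitlines()), start=1
    ):
        if e != a:
            diffs.append(lineno)
    return diffs
```

Fixing the random seed means every iteration can regenerate identical test data, and `compare_outputs` replaces an error-prone manual eyeball comparison.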