Continuous testing and delivery have become the new paradigms in software development and delivery with the widespread adoption of Agile and DevOps by organizations across the globe. With customer experience being at the forefront of every release, there is tremendous pressure on QA to enable consistent delivery of high-quality applications at great speed. That being said, how does QA manage to keep up with these rising expectations?
Well, many testing tools, frameworks, and processes have evolved alongside these new methodologies and can help meet the high expectations, provided they are implemented effectively. However, companies still encounter many risks in the agile delivery process despite having the latest tools and accelerators. One of them is delayed release decision-making, driven by a heavy dependency on the pace of test execution and reporting. Let’s take a closer look at the major factors affecting the breadth and pace of test execution and discuss a few specific methods to alleviate the risks associated with agile continuous delivery.
The ‘ever-growing’ test repositories
QA teams strive to improve ‘test coverage’ as part of every sprint and release. A variety of test techniques are adopted in sprints to test and break the application with the intent to deliver the best working software. In the process, the test case repository (especially the regression set) incrementally grows after every sprint, with new test cases getting added for reasons such as these:
- Functional test cases to cover the new features being developed in the sprint
- Negative and edge cases that may have an indirect but major impact on a feature
- New test scenarios uncovered during exploratory testing
- UAT bugs caught by the business
- Production bugs caught by the end customer
These are just a few of the factors driving test case growth; how many test cases get added also depends on the nature and complexity of the features. And it doesn’t stop there: there may be dependencies on internal or external applications that also have to be covered in testing, further increasing the size of the test repository.
Now, what does that mean for testing effectiveness? As the regression test repository grows with every sprint, so does the risk of inadequate test execution coverage. How does it pose a risk to the release decision? With new features, bug fixes, enhancements, and changes coming in as part of each sprint, a ‘bulk run’ of regression tests becomes a necessity, driven by the impact of code changes and dependencies. Multi-platform, multi-device scenarios can further increase overall test execution time and cause delays.
The ‘last-minute’ challenge
The consistently growing regression test repository is just one part of the problem. There is another major challenge that comes up often and makes the whole Agile delivery and release process even more complicated; you have likely faced it in your own Agile projects. It is the ‘last-minute’ decision change pertaining to the build and go-live. The reasons can be many. Let’s look at a few common contributing factors:
- Feature development delay from the development team, resulting in a last-minute integration to the release build
- UAT uncovering ‘Critical’ and ‘High’ severity defects that become ‘must have’ for the release
- QA detecting ‘Critical’ and ‘High’ severity defects late in the release sprint that become ‘must have’ for the release
- A high number of defects getting injected due to ‘last-minute’ build changes as well as a lack of unit testing
What does all the above mean for QA?
Yes, as always, immense pressure mounts on the QA team to quickly retest last-minute fixes and, most importantly, to complete a thorough regression run as well. The entire product or project team anxiously waits for feedback from QA to plan the release steps or to undertake a not-so-easy release date negotiation with the business.
In this scenario, the pace of test execution is what determines whether a large number of test cases can be run within a short window. Automated testing accelerates test execution (the ‘test often’ principle in continuous delivery) and is now an integral part of the regression test cycle in most organizations. But is that enough? You may still end up executing only a portion of your regression test repository if the tests are not well prioritized and intelligently allocated or scheduled to complete within the available time.
This can pose a serious threat to go-live decision-making, with business owners getting neither timely insights nor adequate confidence in test coverage from QA during the ‘Go/No-Go’.
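To make the prioritization idea concrete, here is a minimal sketch of risk-based test selection under a time budget. The test names, risk scores, and durations are illustrative assumptions, not taken from any specific tool; real teams would derive risk from code-change impact and defect history, and durations from past runs.

```python
# Greedy, risk-first selection of regression tests within a time budget.
# All names, risk scores, and durations are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk: int            # e.g. 1 (low) .. 5 (critical path / recently changed)
    duration_min: float  # average historical run time in minutes

def select_tests(tests, budget_min):
    """Pick the highest-risk tests that fit in the available time.

    Tests are ordered by risk (descending), then duration (ascending),
    so that at equal risk more tests fit into the window.
    """
    ordered = sorted(tests, key=lambda t: (-t.risk, t.duration_min))
    selected, used = [], 0.0
    for t in ordered:
        if used + t.duration_min <= budget_min:
            selected.append(t)
            used += t.duration_min
    return selected

suite = [
    TestCase("checkout_payment", risk=5, duration_min=12),
    TestCase("login_sso", risk=4, duration_min=5),
    TestCase("profile_edit", risk=2, duration_min=8),
    TestCase("search_filters", risk=3, duration_min=6),
]
picked = select_tests(suite, budget_min=25)
print([t.name for t in picked])  # highest-risk tests that fit in 25 minutes
```

Even a simple policy like this gives the ‘Go/No-Go’ meeting something defensible: the tests that did run were the riskiest ones, and the ones skipped were skipped deliberately.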
The role of distributed testing
Distributed testing can play a key role in minimizing some of these test execution and coverage risks. It is a process by which tests are prioritized and run ‘in parallel’ to achieve faster results. Many organizations already use this technique with automated testing; however, if not done effectively, it may not yield the desired outcome. The strategy used to prioritize and organize tests for distributed execution, the resources (infrastructure, its ability to scale, connected components, etc.), and the driving tool/framework must work together to make it a success. We will take a closer look at the key considerations for distributed testing in the next part of this series.
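The scheduling part of that strategy can be sketched with the classic longest-processing-time (LPT) heuristic: hand out the longest tests first, always to the least-loaded worker, so no single machine becomes the bottleneck. The suite names and durations below are hypothetical; real tools typically use recorded run times for the same purpose.

```python
# Balancing test suites across parallel workers with the LPT heuristic.
# Suite names and durations are illustrative assumptions.
import heapq

def distribute(tests, num_workers):
    """Assign (name, duration) pairs to workers, longest first,
    always giving the next test to the least-loaded worker."""
    heap = [(0.0, w, []) for w in range(num_workers)]  # (load, worker id, assigned)
    heapq.heapify(heap)
    for name, duration in sorted(tests, key=lambda t: -t[1]):
        load, w, assigned = heapq.heappop(heap)  # least-loaded worker
        assigned.append(name)
        heapq.heappush(heap, (load + duration, w, assigned))
    return sorted(heap, key=lambda e: e[1])  # order results by worker id

tests = [("suite_a", 9), ("suite_b", 7), ("suite_c", 6),
         ("suite_d", 5), ("suite_e", 4), ("suite_f", 3)]
for load, worker, assigned in distribute(tests, num_workers=2):
    print(f"worker {worker}: {assigned} ({load} min)")
```

With these numbers, both workers end up with 17 minutes of work, so the wall-clock time of the run is roughly the total divided by the worker count, which is exactly the payoff distributed testing promises.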