Examining barriers to Continuous Testing in the enterprise

The promise of DevOps was faster service at higher quality. Testing has served as the boat anchor for that promise. Here’s why, and what to do about it.

Despite what the name implies, Continuous Testing (CT) is actually a strategy to reduce test cost. The term refers to a particular kind of test: those that can be automated and run by a computer, without human assistance. A program runs just after a build, creates a test environment, runs a set of automated checks, and returns results quickly. This tightens the feedback loop for every change while reducing the risk that the developer broke something large. The strategy is particularly good at finding regression bugs, where a change to one feature impacts other features outside the planned scope of testing.
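
To make that concrete, here is a minimal sketch in Python using pytest; the discount_total function and its rules are hypothetical examples, not taken from any particular product. A CI server would run checks like these (for example, with "pytest -q") as a post-build step after every commit.

    # test_pricing_regression.py -- a tiny automated check a CI server can run
    # after every build, with no human assistance.
    # The discount_total function and its rules are hypothetical examples.

    def discount_total(subtotal, percent):
        """Apply a percentage discount, never letting the total go negative."""
        total = subtotal - (subtotal * percent / 100.0)
        return max(total, 0.0)

    def test_ten_percent_discount():
        assert discount_total(100.0, 10) == 90.0

    def test_discount_never_goes_negative():
        # Guards against a regression where an oversized discount
        # produced a negative invoice total.
        assert discount_total(50.0, 200) == 0.0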

A 2018 report by Sauce Labs found that 88% of the organizations surveyed were using Continuous Integration (CI); that is, they were building the software with every change. The same survey found that 87% of respondents had management support for test automation initiatives, yet only 28% actually had a large amount of test automation coverage.

And coverage is the key. The goal is to be able to do enough testing to ship code every time a programmer completes a minimally marketable feature.

To get there, we need to overcome what Wolfgang Platz calls the “three nightmares of test automation” in his book “Enterprise Continuous Testing.” These are test maintenance, test data, and test environments.

Test maintenance

When people talk about Continuous Testing, they usually mean driving the user interface, the same way a customer would. That software is under development, which means it is changing. So the tests fail, because the software no longer does what it was supposed to do yesterday; instead, it does what it is supposed to do today. The tests do not know that, and they report failures that a human has to check, debug, repair, and re-run.

For quality, the fail->fix->pass loop can actually be good, as it ensures a double-check. It also drives up costs. Platz points to innovations like model-driven testing, which allow a single change to fix a large number of “failures” caused by maintenance. You can also accomplish that with re-usable modules or the debugging strategies that are becoming more common in test tools today.
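
One common way to get that re-use is the page-object pattern. Here is a minimal sketch assuming a Selenium-based UI suite; the URL, element IDs, and class name are made-up examples. When the login screen changes, only this one module has to be updated.

    # login_page.py -- a re-usable module shared by many UI tests.
    from selenium.webdriver.common.by import By

    class LoginPage:
        URL = "https://example.test/login"            # hypothetical test URL
        USERNAME = (By.ID, "username")                # update locators here,
        PASSWORD = (By.ID, "password")                # in one place, when the
        SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")  # UI changes

        def __init__(self, driver):
            self.driver = driver

        def log_in(self, user, password):
            self.driver.get(self.URL)
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

Dozens of tests can call LoginPage(driver).log_in(...); a renamed field breaks them all at the same time, and a single edit here brings them all back.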

Platz’s second nightmare is getting the data right.

Test data

Imagine a seeded database with all the perfect test information and pre-planned test scenarios, including dates on insurance claims. Over time, the dates go stale; the “claims” are now too old to be accepted as new claims. Or the database itself may change. Regulated industries may need to test with production-like data but not be allowed to actually use production data.

Platz’s suggestion is to use tools to generate the test data, something he calls synthetic test data. Meanwhile, new tools are emerging in this space.
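
As an illustration only, here is a sketch of synthetic test data using nothing but the Python standard library; the field names are made up, not tied to any real schema. Because the dates are generated relative to “today,” the claims never age out of the window the tests expect.

    # make_test_claims.py -- generate synthetic insurance claims on demand.
    import csv
    import random
    import uuid
    from datetime import date, timedelta

    def synthetic_claim():
        incident = date.today() - timedelta(days=random.randint(1, 30))
        filed = incident + timedelta(days=random.randint(0, 5))
        return {
            "claim_id": str(uuid.uuid4()),
            "incident_date": incident.isoformat(),
            "filed_date": filed.isoformat(),
            "amount": round(random.uniform(100, 5000), 2),
        }

    if __name__ == "__main__":
        # Write a fresh batch of claims for this test run, then throw it away.
        with open("claims.csv", "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["claim_id", "incident_date", "filed_date", "amount"]
            )
            writer.writeheader()
            writer.writerows(synthetic_claim() for _ in range(100))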

Test environments

While most organizations have Continuous Integration, one thing I still rarely see in my consulting work is self-service, on-demand test environments. For CT to work, the build system needs to be able to spin up a test server, likely in the cloud, for just that build, and run the tests against it. When Platz did his research, he found that 63% of respondents agreed test/QA was the bottleneck in software delivery. I have to expect that waiting for a test environment was a huge part of that delay.
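
Here is a minimal sketch of what self-service can look like, assuming the application is packaged as a Docker image; the image name, port, and APP_URL variable are placeholders rather than a recommendation of any particular tool. The pipeline starts a throwaway container for just this build, points the automated checks at it, and tears it down afterward.

    # spin_up_and_test.py -- a per-build, throwaway test environment.
    import os
    import subprocess
    import time

    IMAGE = "myapp:latest"   # hypothetical image produced by the CI build
    PORT = "8080"

    def main():
        # Start a disposable container for just this build.
        container_id = subprocess.check_output(
            ["docker", "run", "-d", "--rm", "-p", f"{PORT}:80", IMAGE],
            text=True,
        ).strip()
        try:
            time.sleep(5)  # crude wait; a real script would poll a health endpoint
            # Point the automated checks at the throwaway environment.
            env = dict(os.environ, APP_URL=f"http://localhost:{PORT}")
            subprocess.run(["pytest", "tests/smoke"], check=True, env=env)
        finally:
            subprocess.run(["docker", "stop", container_id], check=True)

    if __name__ == "__main__":
        main()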

Building self-service test environments is often the No. 1 form of automation I see benefiting organizations. If you’re waiting a day for a test environment, a smoke test that runs in 10 minutes instead of an hour won’t help.

An alternative: Better engineering

If the main pain point of Continuous Testing is maintenance, and the goal is to reduce test cost, then the answer may be to do less of it. That is, instead of trying to automate every possible path on the user journey (which is infinite), use a variety of other techniques to reduce deployment risk. This shifts the focus from mass inspection with every release to an architecture with fewer regression bugs, more human testing within each feature, and less time for problems to live on production before they are found and fixed.

Earlier, I wrote that coverage was the key, because in order to release continuously we need to test all the things, all the time. To be more precise, high coverage is presented as the key. The real barriers turn out to be the infrastructure, along with the other good practices designed to reduce regression defects.

Write better code, and you find out you need fewer tests.
