Continuous Testing: Surveys show contradictions in how much is actually happening in the enterprise

Are companies doing Continuous Testing? Two recent surveys provide a few answers … and some caution.

While Continuous Testing is becoming a de facto standard for software groups in Silicon Valley, barriers to adoption persist, especially outside the West Coast and for legacy software. Two groups, Capgemini and QA Supermarket, recently released surveys that paint startling, and contradictory, pictures of the state of the industry.

Capgemini’s 2020 Continuous Testing Report takes an optimistic view, stating that 55% of the organizations it surveyed have “adopted a continuous testing approach,” with 37% using containers to automate the creation of virtual test machines and 42% using artificial intelligence for predictive analytics. The QA Supermarket data pointed the other way: 44% of respondents reported actual-device (“real”) testing as their primary mechanism for all forms of software testing, and 13% reported they were not testing at all. That leaves one survey saying 55% of groups are doing Continuous Testing, a relatively mature process that requires a fair bit of infrastructure or tooling, while the other says 57% are either doing nothing or running real end-to-end tests with humans.

I spoke with Mark Buenen, the leader of quality engineering for Capgemini, and Paul Belevich, CEO of QA Supermarket, to understand how these surveys worked, who they surveyed, and to integrate the two perspectives.

Survey methods

The Capgemini survey first identified 500 larger organizations, then sent the questionnaire to a single leader responsible for software in each; that might be a vice president, a CTO, or a general manager. The QA Supermarket survey, by contrast, went out to 140 people involved in testing in some way, which could be a test lead, project manager, programmer, or development manager. The QA Supermarket data skewed toward smaller companies, with only 40% of respondents working at a company with more than 100 employees, and it was much more fine-grained, asking what kind of software the person tests, what types of testing they do, and whether the respondent felt the organization did enough. The first disconnect was in that data: While 82% of tech managers thought there was “definitely” or “probably” enough testing, that number dropped to 67% for QA engineers and QA managers. One of the common reasons given for not testing enough: “The decision makers at my organization believe we do enough testing.”

But there are the 13% who report doing no testing. Belevich said the primary reason to make such a claim was that testing was happening outside the team. For example, the customer might do some sort of formal user acceptance testing; thus, “we” don’t do testing. Of course, programmers could click through screens, write unit tests, and debug as part of their work process, but testing might not exist as a formal, “external” role inside the team: There might be no slot for it in the workflow, or no one with the title “tester.” Given that explanation, I would expect the 13% number is artificially high.

The Capgemini survey, on the other hand, was incredibly optimistic. It stated that 16% of the survey respondents were using “predictive test selection and optimization,” 14% were doing “release risk [AI] prediction,” 12% were doing “automatic defect remediation,” and 9% were running “self-healing test scripts.” At first, I could not understand where these numbers came from. Adding up the numbers in figure 8 of the survey, I see they total 100%; it turns out respondents were forced to pick exactly one option. In context, the percentages represent the one thing each respondent, in 2019, expected to look into doing in 2020, not what they were actually doing. On the plus side, I found AppSurify, a company whose tooling can analyze some types of code changes and run only the subset of automated checks that could be impacted by the change. These tools are starting to emerge, slowly, but I am very skeptical of over-hyped solutions.
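To make “predictive test selection” concrete, here is a minimal sketch of the test-impact-analysis idea behind such tools, not AppSurify’s actual product. It assumes a pre-built map from test files to the source files they cover; the `COVERAGE_MAP`, file paths, and base branch are hypothetical, and a real version would generate the map from a coverage run rather than hard-coding it.

```python
import subprocess

# Hypothetical coverage map: which source files each test file exercises.
# In practice this would be produced by a coverage tool, not hand-written.
COVERAGE_MAP = {
    "tests/test_billing.py": {"app/billing.py", "app/tax.py"},
    "tests/test_accounts.py": {"app/accounts.py"},
    "tests/test_checkout.py": {"app/billing.py", "app/cart.py"},
}

def changed_files(base: str = "origin/main") -> set[str]:
    """Return the set of files changed relative to a base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

def select_tests(changed: set[str]) -> list[str]:
    """Keep only the test files whose covered sources intersect the change."""
    return sorted(
        test for test, sources in COVERAGE_MAP.items()
        if sources & changed
    )

if __name__ == "__main__":
    selected = select_tests(changed_files())
    # Hand the reduced list to the test runner, e.g. pytest.
    print("Tests to run:", selected or "none impacted")
```

Even this toy version shows the payoff: a one-line change to `app/accounts.py` would trigger one test file instead of the whole suite. It also shows the risk, since the selection is only as good as the coverage map.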

Explaining the disconnect

Buenen acknowledged a real gap between rhetoric and practice. “When we see what people are actually doing, the adoption at implementation is really very slow. Strangely with the adoption of DevOps, which should require automation, in many cases the opposite is true—the amount of automation is actually going down.”

In my own work, I split functional testing into two major categories. There is the testing of an individual feature, usually best done by a human. Then there is an exploration of the entire system prior to release, sometimes called “regression testing.” By breaking the application into separate pieces, rolling out only the module that changed, and combining strong monitoring with fast rollback, some teams can change the risk picture enough to eliminate the need for most regression testing; a sketch of that guarded-rollout idea appears below.
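The sketch below shows the shape of a monitored rollout with fast rollback, under stated assumptions: `deploy`, `rollback`, and `error_rate` are hypothetical hooks standing in for a real deploy tool and metrics backend, and the error budget and soak period are illustrative numbers, not recommendations.

```python
import time

# Hypothetical hooks: a real system would call a deploy tool and query a
# metrics backend here. These placeholders only make the sketch runnable.
def deploy(module: str, version: str) -> None:
    print(f"deploying {module} {version}")

def rollback(module: str) -> None:
    print(f"rolling back {module}")

def error_rate(module: str) -> float:
    return 0.0  # placeholder; would query monitoring in practice

ERROR_BUDGET = 0.02   # assumed threshold: 2% request error rate
WATCH_SECONDS = 300   # assumed soak period after the rollout
POLL_SECONDS = 15

def guarded_rollout(module: str, version: str) -> bool:
    """Roll out one module, watch its error rate, roll back on regression."""
    deploy(module, version)
    deadline = time.monotonic() + WATCH_SECONDS
    while time.monotonic() < deadline:
        if error_rate(module) > ERROR_BUDGET:
            rollback(module)
            return False
        time.sleep(POLL_SECONDS)
    return True
```

The point is the trade: instead of proving the whole system correct before release, the team limits the blast radius of each change and makes recovery cheap.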

Also, if you send a survey to one person in a very large organization, especially a decision maker disconnected from the work, they are likely to answer for their single best-performing team or business unit. So don’t feel bad if your team isn’t performing as well as some vice president at a Fortune 500 company believes his best team is. At the same time, many organizations don’t see testing as a documented, formalized process done by someone with the title “tester.” That doesn’t mean it isn’t happening.

Take what you can from the surveys, but do your own thinking. If you want to pursue continuous testing, start with automating the build and delivery pipeline, including automating the creation of test data and test environments; a sketch of one piece of that follows.
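As a minimal sketch of automated test-environment creation, assuming Docker and pytest are available: a session-scoped fixture that starts a disposable PostgreSQL container for the test run and destroys it afterwards. The image tag, host port, and credentials are illustrative, not a recommended setup.

```python
import subprocess
import time

import pytest

@pytest.fixture(scope="session")
def postgres_url():
    """Start a throwaway PostgreSQL container, yield its URL, tear it down."""
    container_id = subprocess.run(
        ["docker", "run", "-d", "--rm",
         "-e", "POSTGRES_PASSWORD=test",
         "-p", "55432:5432",
         "postgres:15"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    time.sleep(3)  # crude readiness wait; real code would poll the port
    try:
        yield "postgresql://postgres:test@localhost:55432/postgres"
    finally:
        subprocess.run(["docker", "rm", "-f", container_id], check=False)
```

Any test that asks for `postgres_url` gets a clean, reproducible environment, which is exactly the kind of plumbing a continuous testing pipeline depends on.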
