
Is artificial intelligence making software testers obsolete? Probably not. AI is more likely to make software developers obsolete. But there’s no doubt AI means a major change in software testing. This post discusses some of the changes we are likely to see quite soon.

Long ago, test automation took over the robot-like part of testers' work, i.e. executing tests designed by someone else. Testers did not vanish: their time was freed for designing more and better tests and, of course, automating them. Later, agile methods and DevOps were supposed to make the tester redundant because developers would test their code while building it. It turned out that most developers preferred writing code to testing it. We also learned that one needs to test the whole business process across application boundaries. Software testers therefore extended their scope to exploratory testing and end-to-end testing.

How Will AI Transform Software Testing?

AI will transform the role of the tester by becoming a great helper in activities testers don’t do very well. AI is likely to shape the whole process of testing rather than just improving individual tasks.

It is characteristic of a tester’s work that the same things get done several times with little variation and a lot of waiting. Testers wait for:

  • a new release to test,
  • automated test runs to complete,
  • a fix to a bug that prevents further testing,
  • and so on.

Testers also carry out tasks that require a lot of attention to detail and may not be very motivating, such as analyzing the results of automated test runs, maintaining automated tests that failed because of changes in the application, or calculating and reporting the testing status. Every minute they spend on these important tasks, someone else is waiting for test results.

[Image: a human software tester playing chess with an AI robot tester, created with a simple prompt using an AI tool]

The bulk of work in testing is test execution. It is largely automated but still consumes human effort, and that effort slows down the whole test cycle. We still need humans to automate the tests a robot will run, to maintain those tests when the application changes, and to analyze the test results. There are already AI-enabled test tools that can select and order tests by how likely they are to detect errors, auto-heal broken tests, or pre-fill defect reports.

The Immediate Impact of AI on QAs

In the next wave of automated test creation, we’ll see tools that generate tests directly from a use case description or even more directly by observing a human tester exploring the application. This is a great step from automated testing towards autonomous testing. We may not be quite there yet, though. Most applications and business processes are proprietary, and the data needed to train the AI is scattered around in different software testing tools. Therefore, we are likely to first see human-supervised testing rather than fully autonomous testing.

Most software teams have too few tests rather than too many. With the use of AI, the size of the test asset is likely to grow. While running automated tests is practically free, the speed of getting test results still matters. Therefore, it would be beneficial to arrange the regression runs so that those tests that are most likely to detect errors will be run first. If the AI understands the lifecycle data well enough, it can look at the code changes and former test results, determine how to arrange the tests, and thus accelerate the feedback cycle.
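As a thought experiment, the ordering idea above can be sketched in a few lines. This is a minimal illustration, not a real tool: the data fields, the scoring formula, and its weights are all assumptions about what an AI-enabled prioritizer might take into account (past failure rate plus overlap with the changed files).

```python
# Hypothetical sketch of risk-based test ordering: rank regression tests
# by recent failure rate and by overlap with the files changed in a commit.
# Field names and the scoring weight are illustrative assumptions.

def prioritize(tests, changed_files):
    """Return tests sorted so the most error-prone run first."""
    def risk(test):
        runs = test["passes"] + test["failures"]
        failure_rate = test["failures"] / runs if runs else 0.5  # unknown test: mid risk
        overlap = len(set(test["covers"]) & set(changed_files))
        return failure_rate + 0.3 * overlap  # weight overlap with the change
    return sorted(tests, key=risk, reverse=True)

tests = [
    {"name": "test_login",    "passes": 90, "failures": 10, "covers": ["auth.py"]},
    {"name": "test_checkout", "passes": 50, "failures": 50, "covers": ["cart.py", "pay.py"]},
    {"name": "test_search",   "passes": 99, "failures": 1,  "covers": ["search.py"]},
]
ordered = prioritize(tests, changed_files=["pay.py"])
print([t["name"] for t in ordered])
# → ['test_checkout', 'test_login', 'test_search']
```

A real AI-based tool would learn such a scoring function from lifecycle data rather than hard-code it, but the effect is the same: the tests most likely to fail run first, so feedback arrives sooner.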

If AI is able to select tests based on how likely they are to detect errors, it should also be able to predict where errors are likely to be found. A human tester can benefit from this information too, by focusing manual exploratory testing on those features.

For many people, AI is a synonym for the natural-language generation capabilities demonstrated by ChatGPT. Those capabilities have their place in testing, too. If AI knows how to create a test case, it also knows how to describe test execution in a defect report. An AI-generated defect report is likely to be easier to understand than one hastily written by a human rushing to the next test case.
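Even without a language model, the structured data a test tool already records is enough to pre-fill a readable defect report; an AI assistant would mainly improve the wording. The sketch below is illustrative only, and every field name in it is a hypothetical assumption.

```python
# Hypothetical sketch: draft a defect report from structured data a test
# tool already records. Field names are illustrative assumptions.

def draft_defect_report(test):
    steps = "\n".join(f"  {i}. {s}" for i, s in enumerate(test["steps"], 1))
    return (f"Title: {test['name']} failed on {test['build']}\n"
            f"Steps to reproduce:\n{steps}\n"
            f"Expected: {test['expected']}\n"
            f"Actual:   {test['actual']}")

report = draft_defect_report({
    "name": "Checkout total", "build": "build 1042",
    "steps": ["Add two items to the cart", "Open the checkout page"],
    "expected": "Total equals the sum of item prices",
    "actual": "Total shows 0.00",
})
print(report)
```

The point is that the reporting burden the article describes is largely mechanical, which is exactly where generated text helps most.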

In addition to defect reports, testing produces a lot of quantitative data about the quality of the software and even about the process creating the software. The problem is that the people close to testing, who understand the data, don't care to explain it to those who should understand it, and those who should understand may not care to listen. AI can do both parties and their leaders a great favor by transforming test statistics into verbal insights, conclusions, and recommendations.

Will AI Replace QAs?

It is tempting to ask which is easier for AI to replace: a tester or a developer. Generating a test is easier than generating code. On the other hand, generating meaningful tests requires more imagination. After all, coding transforms a requirement into code that makes the application work, while testing transforms a requirement into one or more tests that try to make the application break.

On a large scale, test design is simpler than coding, though. Building a large information system is much more challenging than writing code for a single function. Designing tests for a large information system is not much more difficult than designing tests for an individual feature. It’s just more work. The test setup may be much more challenging, though. 

For example, setting up and verifying a test in which an order created in a mobile app is processed in Salesforce and then flows to SAP for fulfillment is a very complex task, although the test scenario itself is quite straightforward. We will need AI-assisted humans as well as human-assisted AI.

Even these small examples show that things will be very different in the near future. AI will improve the speed and productivity of building software, but it will not yet be able to build software and assure its quality autonomously.

If I were a full-time software tester today, I would not worry about AI taking my job. I would worry about how I can test the AI.

Interested in more information on AI and Software Testing? Then sign up for the QA Lead newsletter for all the latest trends.

By Esko Hannula

Esko Hannula is VP Product Line Management at Copado, a DevOps and testing solution for low code SaaS platforms that run the world’s largest digital transformations. Backed by Insight Partners, Salesforce Ventures and SoftBank Vision Fund, Copado accelerates multi-cloud, enterprise deployments by automating the end-to-end software delivery process to maximize customers’ return on their cloud investment.