
Automation, Thinly Sliced

It cannot be denied that test automation has revolutionized software testing. And just in time too! In today’s world of ultra-distributed, containerized, continuously updating services, meaningful software testing would be impossible without it.  

We are fortunate to have so many sophisticated automated testing tools at our disposal. And, might I add, the well-trained QA engineers to use them for everyone’s benefit in the development effort.

However, for some reason, it is the nature of our industry to habitually reduce such significant advancements in capability to fads, and these fads into meaningless word salads and slogans that are robotically repeated by upper managers, pundits, and consultants who just want an excuse to stop thinking about the problem. Or to make a quick buck.  

So it is with the breakthroughs in test automation, which have quickly been roped into the reductive narrative of, “we just need to automate all our testing! Right now! And all our testing problems will be solved!”

Not only is this a terrible idea, even if it were proposed by people trying to take the problem seriously rather than avoid it. It is also pernicious: by foreclosing the opportunity to think deeply about how to integrate automation into a testing effort, the narrative ensures that effort's failure. Which is unfair to all the people (including customers) who depend on its success.

In this respect, test automation is just inheriting a special instance of the general approach to SQA by those who don’t, and don’t want to, understand it. It is treated like a bulk good, not a complex expertise. This is why people in QA keep hearing from upper management, “we just need more QA!” as though there is a software deli somewhere where you can order it by the pound, thinly sliced.

Software Deli Infographic

This is why now the mantra is, “we just need more automation!” with no thought given to what that would really mean in the context of your software development efforts. As such, this discourse feeds the dysfunctions of your organization. It does not fix them.  

Trying to directly resist these faddish tendencies is like trying to keep the sun from rising. Or, in this case, setting. The best course of resistance is to just nod and smile at the VPs and consultants, go back to your cubicles and figure out the right way to do automation, and then present those ideas as the brilliant products of your managers’ imaginations. But you’ve probably learned that lesson already on other issues.

So let’s do that then. Here is my list of the pitfalls of implementing test automation, and ways to mitigate them, so that automation can fulfill its considerable promise in your own testing efforts.

Automation Scope

The most important question to answer at the beginning is where automated testing gives the best bang for the buck in your testing efforts. Taking the time to do this will save you many, many headaches down the road.

Answering this question boils down to deciding how much of your testing needs to be automated, and how much needs to remain manual. This may come as a surprise to those of you incessantly bombarded by the gospel of "automate all testing!" But pay no attention to that nonsense; these people have no idea what they are talking about.

Automating all your testing - assuming this is even possible - would be a terrible decision that could only lead to disaster for your testing efforts.  

Automation Scope Infographic

There are marvelous things automation can do that manual testing can’t, or at least not as quickly and repeatedly. But the opposite is also true. There are things only manual testing can do well, that automation cannot. What are those things in each case?

Where manual testing excels over automated testing is simply the human factor. The benefit of having an actual human patiently doing deep exploratory testing cannot be duplicated by automation. And I'm not just talking about bug discovery.

Anyone can find bugs. Customers do it for free all the time. The value a professional QA engineer adds is an intuitive sense of how to break a piece of software, especially a sense for the ways customers will use it that it was never designed to be used, often with catastrophic results.

Another advantage of manual testing is that, having found a bug, a manual tester can immediately engage in determining its scope and severity. That is to say, narrowing down the test contexts (OS’s, workflows, intra-service interactions and dependencies) where it specifically manifests, and those where it does not. 

Automated test scripts can’t do this very well at all and, from a software engineer’s point of view, this is really the crucial information they need to diagnose and fix any bug. A bug report that simply states, “I did this and this bad thing happened” is useless to them.

Manual testing is in fact much more time-efficient in providing this information and, being inherently interactive and in the moment with engineering, is far more information-rich in feedback and root cause analysis.  

Yes, Virginia, manual testing is not always the less efficient, more time-consuming way to find, diagnose, and fix bugs, and to validate (or invalidate) their fixes.

Where automation has an obvious advantage, however, is in continuous testing for uptime and stability—particularly for distributed systems. This is something that is far more central now given our current development context of continuous integration and deployment into distributed environments in real-time.
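
To make that concrete, here is a minimal sketch of what a continuous uptime check might look like, written in Python with pytest and requests. The service names and /health endpoints are hypothetical placeholders for your own system; schedule something like this from CI every few minutes and you have a basic stability monitor.

```python
import pytest
import requests

# Hypothetical service endpoints; substitute your own.
SERVICE_URLS = [
    "https://orders.example.internal/health",
    "https://billing.example.internal/health",
    "https://auth.example.internal/health",
]

@pytest.mark.parametrize("url", SERVICE_URLS)
def test_service_is_up(url):
    # A short timeout keeps one hung service from stalling the whole run.
    response = requests.get(url, timeout=5)
    assert response.status_code == 200, f"{url} is not healthy"
```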

Automation is also more efficient at what you might call “bulk testing”, where you have a huge matrix of thousands of system conditions and their variables to cover in order to see if any of them are completely broken or will bring the system down.
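
Here is a sketch of what such bulk testing can look like in practice: pytest can generate one test per cell of a condition matrix built from a few short lists. The dimensions (browsers, locales, payment methods) and the run_checkout helper are hypothetical stand-ins; substitute the variables that matter in your system.

```python
import itertools
import pytest

BROWSERS = ["chrome", "firefox", "safari", "edge"]
LOCALES = ["en-US", "de-DE", "ja-JP", "pt-BR"]
PAYMENT_METHODS = ["card", "paypal", "invoice"]

# 4 x 4 x 3 = 48 generated cases from three short lists; a real matrix
# with thousands of combinations is built exactly the same way.
MATRIX = list(itertools.product(BROWSERS, LOCALES, PAYMENT_METHODS))

def run_checkout(browser, locale, payment):
    """Hypothetical placeholder: drive one checkout under these conditions."""
    # In a real suite this would configure and exercise the product.
    return True

@pytest.mark.parametrize("browser,locale,payment", MATRIX)
def test_checkout_smoke(browser, locale, payment):
    # The point is breadth: does any combination fall over completely?
    assert run_checkout(browser, locale, payment)
```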

The rule of thumb you should use to guide your designs on where to prioritize either manual or automated testing is this: 

Manual testing is usually preferred for the initial testing of new features and capabilities. Automated testing is clearly best for continuous general regression, and for load and performance testing.

Over the lifecycle of a product or service, testing of the same capability should evolve from manual to automated. It is a capability lifecycle continuum. Not a Chinese wall, never to be breached, with neither side needing to be aware of what is happening on the other. The default assumption should be that what is manually tested today should transition to automated testing over time.

Automation Engineer Qualifications

It is perhaps inevitable that when hiring automated testing engineers, the key qualifications desired will be the candidates’ levels of proficiency in (a) the automation tools they will be expected to use, and (b) the test scripting languages used by those tools.  

Automation Engineer Qualifications Infographic

But if these are the only qualifications you are focusing on, you are making a big mistake. These are necessary, but hardly sufficient qualifications for the automation engineer role.

Why? For the simple reason that knowing how to use an automation tool, and how to create scripts in its scripting language, tells you exactly nothing about a candidate's understanding of QA itself: how to design an effective, diagnostic test, and how to interpret its results.

Unfortunately, I almost never see automation engineers who have this training or background. They get hired simply because they’re scripting wizards, not because they have any skill in designing an effective, diagnostic test.

These qualifications are actually more important than experience with the automation tools and languages they will be using. Those can easily be learned when needed. But it takes a great deal of time and effort to train someone to design tests and interpret their results.

It would in fact be better to just take an experienced analyst you already have and get them trained on the necessary automation tools. These automation skills are, after all, now a commodity. The intuition and insight of an experienced tester are not.

Automating ignorance just gives you ignorance faster and more repeatably, and undermines the very efficiency you were looking for from automated testing.

Automation Design Standards

Automation Design Standards Infographic

It is a no-brainer that, if you are embarking on a large-scale test automation effort, you will first need to define general standards that all automation must conform to in order to be accepted for use in testing. Yet, as with many no-brainers, it seems there are fewer brains around than one might expect.

Here is what those standards should cover.

Intelligibility

One of the frustrating mysteries of software development is how artisanal it turns out to be. It is, unfortunately, not at all a rare experience for a software engineer to tell you that they can’t fix a bug because they didn’t write the code where it appears. It’s as though code written by another engineer is in a foreign language they don’t understand.

This is also, sadly, equally common in test automation engineering. I can’t tell you how many times a QA engineer has told me they don’t know how to update a piece of test automation for a major product upgrade because they did not write it, and therefore don’t, and can’t, understand it. Oh, and the engineer who did left the company two months ago.

This is even less acceptable for automated test scripts than for new product code, since a test script merely makes certain steps occur rather than implementing design logic. Yet it is astonishingly common.

What this means is that before you hire a dozen QA engineers and set them to merrily churning out automated test scripts by the hundreds, be sure you have first defined, and trained on, general standards of intelligibility.  

It needs to be a requirement that all test scripts be mutually intelligible to any of your QA engineers, so that no script's upkeep winds up single-threaded on one, often transitory, resource.

Define and enforce a review and acceptance process that vets every candidate script against these standards before it can be deployed in testing.
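
To illustrate, here is a minimal sketch of a test script written to such an intelligibility standard, in Python with requests: a docstring stating intent, coverage, and expected results, and named constants instead of magic values. The URL, credentials, and requirement ID are all hypothetical.

```python
import requests

BASE_URL = "https://staging.example.internal"  # hypothetical test environment
MAX_LOGIN_SECONDS = 3  # agreed SLA, documented in the shared test plan

def test_login_with_valid_credentials():
    """Verify a valid user can log in within the agreed SLA.

    Covers: AUTH-101 (basic login), a hypothetical requirement ID.
    Expected: HTTP 200 and a session cookie, in under 3 seconds.
    """
    response = requests.post(
        f"{BASE_URL}/login",
        data={"user": "qa_standard_user", "password": "known_good"},
        timeout=MAX_LOGIN_SECONDS,
    )
    assert response.status_code == 200
    assert "session" in response.cookies
```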

Maintainability

Maintainability is closely related to the problem of intelligibility, but the two are logically and operationally separate. A test script may be easily understood by test engineers who did not write it and yet still be structured in such a way that it is a beast to update or modify.

Here’s a real-life example. 

At one of my employers, we were preparing the test effort for a medium-level upgrade to their flagship product. It was scheduled for a four-week release cycle, which was actually reasonable in this case.

As I was planning the testing effort required, I realized we also would have to update the main automated regression test script. When I asked the lead QA engineer how long that update would take, he answered “eight weeks”. Twice the time for the entire release! Even granted the engineering sin of inflating estimates, this was extreme.  

I had a few other test engineers review the script and the estimate. They all agreed: the script was written in such a clumsy and inefficient way that updating it for the next release would mean many weeks of painstakingly rewriting the whole thing, even though the product changes themselves were not fundamental rewrites.

That situation was a classic example of a software engineering sin being teleported into test engineering. It’s time to stop the madness.

My advice here is identical to that I give above on the issue of intelligibility. Don’t, under any circumstances, let your QA engineers go hide in their test engineering caves and emerge a week or two later with something that might work as a test, but will be a nightmare of meaningless suffering to update in a timely fashion.

Institute standards for timely updating, and create a review and acceptance process to ensure they are met. This can easily be folded into the intelligibility review, so both checks become part of the same process.

Your QA engineers are not Renaissance painters; you're not asking them to repaint the Sistine Chapel. It shouldn't take a lifetime.
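
One widely used way to institutionalize maintainability is the page object pattern, sketched below against Playwright's sync API (an assumption on my part; the same idea works with Selenium or any other driver). Every selector and workflow lives in one place, so a UI change in the next release means editing one class, not rewriting every script.

```python
from playwright.sync_api import Page

class LoginPage:
    # Every selector lives here, and only here. Hypothetical values.
    USERNAME_FIELD = "#username"
    PASSWORD_FIELD = "#password"
    SUBMIT_BUTTON = "#login-submit"

    def __init__(self, page: Page):
        self.page = page

    def log_in(self, username: str, password: str) -> None:
        # The workflow is defined once; test scripts never touch raw selectors.
        self.page.fill(self.USERNAME_FIELD, username)
        self.page.fill(self.PASSWORD_FIELD, password)
        self.page.click(self.SUBMIT_BUTTON)

def test_login(page: Page):  # `page` is the pytest-playwright fixture
    LoginPage(page).log_in("qa_standard_user", "known_good")
    assert page.url.endswith("/dashboard")
```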

Test Roles and Test Automation

One of the inefficiency patterns that inhibits the smooth and fruitful integration of automation into your testing effort as a whole is the mistaken assumption that every step in the automated testing process must occur within the automation group itself.

This is another mistake. Though in this case, not entirely an obvious one.

Automation engineers are probably going to be the highest-paid resources in your team so their time is precious. Moreover, the volume of test scripts that will need to be created and continuously updated by that team is only going to grow over time, and yet your pool of test engineers will not be able to grow nearly fast enough to keep up. Not even if you work for Google.

It makes operational sense, then, to introduce some divisions of labor between the automation team and the rest of your team. Specifically, the division between resources that create and maintain automated scripts and tools, and those that actually run the automated tests.

Clearly, the former responsibilities can only be carried out by the automation team itself.  That is, after all, why they were hired in the first place. Yet this is not true of the latter.  

There is no reason that analysts should not also be able to run automated tests on their own and interpret their results.  
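
All it takes is a thin, documented entry point. Here is a minimal sketch in Python: a wrapper that runs a regression suite with one command and reports pass or fail in plain terms, no knowledge of the underlying tooling required. The suite path and report location are hypothetical.

```python
import subprocess
import sys

def main():
    """Run the regression suite and report pass/fail in plain terms."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "tests/regression",  # hypothetical suite path
         "--junitxml=reports/regression.xml", "-v"],
        check=False,
    )
    print("PASSED" if result.returncode == 0
          else "FAILED: see reports/regression.xml")
    sys.exit(result.returncode)

if __name__ == "__main__":
    main()
```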

Very few people think in these terms, but this division of labor makes every kind of sense. For one thing, it frees up significant amounts of time to allow the automation engineers to focus on creating new automation.

For another, it will reinforce the requirement of intelligibility defined above. If automation is central to your testing effort, then it must be equally central that everyone in your test team, automation engineer or not, should be able to understand and use your automated tests. Even manual testers.

Such a system of role definition will greatly increase the resource flexibility of your entire QA team, and therefore also create time and schedule efficiencies that would not otherwise exist. 

Final Thoughts

Developing a robust automated testing capability, as defined above, is simply a necessity today. One cannot talk about professional, effective QA without it.  

Yet many bright adventures begin in enthusiasm and gladness, only to end in defeat. I see this happening in QA with respect to test automation in many, many instances.

The problem is that automated testing is relatively easy to dive into. Expensive, but easy to at least pretend you're making the effort. However, automated testing can also be a cliff, with you, like Wile E. Coyote in the Road Runner cartoons, heedlessly running off it and falling the second you bother to look down.

It needs to be understood from the beginning that test automation has a life cycle. Every test script has a life cycle of months, if not years.

It’s not just a question of writing a bunch of automated test scripts that meet all your needs for the product as it currently stands in its development. It’s a question of how you will be able to seamlessly evolve all that automation as the product evolves over its own lifecycle.

If you ignore the problems of automation scope, QA engineer qualifications, intelligibility, maintainability, and role definition, you will find that, with time, all that automation effort has become a white elephant and a money pit that is impossible to understand or maintain.

And all the money you spent on it will have been wasted. Something your bosses’ bosses will not fail to notice.

On the other hand, if you follow the advice I give above, adapted of course to the particulars of your own challenges and situation, you will largely avoid this crisis of obsolescence and enjoy the fruits of productive test automation for years to come.

As always, best of luck.


By Niall Lynch

Niall Lynch was born in Oslo, Norway and raised in Fairbanks, Alaska, 100 miles south of the Arctic Circle. He received a BA in Religion from Reed College, and an MA in Ancient Near Eastern Languages and Literature from the University of Chicago. Which of course led directly to a career in software development. Niall began working in software in 1985 in Chicago, as a QA Lead. He knew nothing about the role or the subject at the time, and no one else did either. So he is largely self-taught in the discipline. He has worked over the years in the fields of file conversion, natural language processing, statistics, cybersecurity (Symantec), fraud analysis for the mortgage industry, artificial intelligence/data science and fintech. Learning how to adapt his QA methods and philosophy to these wildly different industries and target markets has been instructive and fruitful for their development. And his. He now lives in Palm Desert, California, where he does SQA consulting and is writing a couple of novels. Send questions or jokes to NiallLynch@outlook.com