
One of the key developments in the industry's conception of the QA function over the last decade and a half is the assumption that it consists of three, and only three, skill sets. These are:

  • Some kind of process expertise, however vaguely or unhelpfully defined. Whether that process expertise is actually implemented is not all that important, it seems.
  • Knowledge of automated testing tools and their associated scripting languages.
  • The willingness to take all the blame.

Conspicuous by its absence not only in this list, but in the minds of hiring managers, is something one would think would be the most obvious of all: 

The ability to design an effective test. 

It would be comical, if it weren't so alarming, that almost no one thinks of this as a key QA skill anymore. Because, however a test is carried out, whether manually or through automation, someone has to know how to design it in the first place. Right?

There's one obvious problem with this exclusion of expertise in test design in favor of process and automation: 

Shouldn't you be confident that the person automating your tests knows what counts as a test in the first place? Since automating crap just gives you crap more quickly and repeatably? And being "agile" at doing something incompetently doesn't strike me as a real improvement. Except perhaps to the scrum master.

A revival of the importance, and discipline, of test design is long overdue. Consider this my tiny contribution to that effort.

Testing is nothing like simply "trying stuff to see what happens". 

Some testers are successful at this ad hoc, scattershot approach, at least in the superficial sense of "finding a bug"—which, when you think about it, is not an efficient use of your team's budget, since customers find bugs for free all the time. If QA is to add unique value to the development and release process, "finding bugs" cannot be that contribution, since it is generic.

What's missing from this ad hoc "process" is any certainty as to whether your "testing" has fully exercised the capability envelope of the feature—including capabilities and behaviors that may not have been formally specified by Product or even anticipated by Engineering in its implementation.

An insight which poses the question of how you would go about deriving this set of test cases/conditions in a reliably systematic way. 

Which really gets to the heart of what test design truly is, and needs to be. 

What Is To Be Done?

The first step is to make a few basic distinctions which will certainly already be familiar to most QA people intuitively, but which are rarely presented systematically. 

Let's try to do that.

Explicit vs Implicit Functionality

Let's begin with the distinction between explicit functionality and implicit functionality. 

The former is what most of us think of as "functionality": the features and capabilities that are formally specified by Product, and whose formal specification guides their implementation in Engineering.

Because of this, how to test them may seem cut and dried, but even with respect to well-defined features this is not the case, as we will see later. But at least the level of specificity possessed by explicit functionality makes it easier to build a scaffolding of tests around it (or the illusion of that scaffolding).

Implicit functionality is a very different animal. 

Since, by definition, it consists of behaviors and responses to user or environmental inputs that were not formally defined or anticipated. Failure to adequately test, and to conceptualize testing of, this implicit functionality is by far the largest source of bugs that reach the field after the product has been approved for release by QA (the other major source is inadequate testing in fringe hardware/software/device environments).

In other words:

Testing implicit functionality requires quite a lot of ingenuity and imagination. It is the truly creative part of software testing. 

Becoming really good at it requires a certain amount of fiendish cleverness, and, sadly, that cannot be taught.

No amount of agile process or automation will teach it to you—but it can be encouraged. 

So how do you go about defining testing for implicit functionality? Fortunately, it can be done. But before we dive directly into that question, let’s explore a few further relevant distinctions.

Positive vs Negative Testing

Most people working in QA for any length of time understand the difference between positive and negative testing. 

Positive testing is using the product as it was designed to be used. Negative testing is using the product as it was not intended. Like using your handheld hair dryer in the bathtub. 

Though these distinctions are conceptually clear, each contains nuances that are not often taken into account. The most obvious is edge cases, which apply to both: obviously so in negative testing, but in positive testing as well.

But both types of testing, despite their manifest differences, impose the same requirements on test design. Chief among them is, as always, to parameterize the problem each poses.

Now be aware that the specification will not give you all you need to know to do this. Being a slave to the spec—either Product's or Engineering's—puts your mind on rails. Relying too much on explicit definition is not helpful in this process. It is a specific example of what I call "the empirical fallacy", i.e., waiting for something to explicitly tell you what to do, or what is happening, before you can understand it. 

Proper parameterization of functionality is essential to successful test design, yet this skill is rarely explicitly taught. It is, above all, an exercise in logic, not observation. That is to say, your thinking here must be guided by a clear sense of what can logically happen to or within the system, through user or environmental interaction, whether or not those possibilities were consciously envisioned in the product's specification or implementation.
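
To make this concrete, here is a minimal sketch of what parameterizing a feature might look like, using a hypothetical document-import capability. The parameter names and values are my own assumptions, not anything from a real specification; the point is to enumerate the logically possible combinations mechanically, and only then decide which of them the spec actually addresses.

```python
# A minimal sketch of parameterizing a hypothetical document-import feature.
# The parameter names and values are illustrative assumptions, not taken from
# any real specification: enumerate the logical space first, then decide
# which combinations the spec actually covers.
from itertools import product

PARAMETERS = {
    "file_size":   ["empty", "typical", "just_under_limit", "over_limit"],
    "encoding":    ["utf-8", "utf-16", "latin-1", "undeclared"],
    "structure":   ["well_formed", "truncated", "nested_beyond_spec"],
    "concurrency": ["single_import", "parallel_imports"],
}

def enumerate_test_conditions(parameters):
    """Yield every combination of parameter values as a dict."""
    names = list(parameters)
    for values in product(*(parameters[name] for name in names)):
        yield dict(zip(names, values))

if __name__ == "__main__":
    conditions = list(enumerate_test_conditions(PARAMETERS))
    # 4 * 4 * 3 * 2 = 96 logical conditions; the spec probably describes a handful.
    print(f"{len(conditions)} conditions, e.g. {conditions[0]}")
```

Most of those 96 combinations will collapse into a much smaller set of equivalence classes, but collapsing them should be a deliberate act of analysis, not an accident of whatever the spec happened to mention.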

Back in the software Cretaceous (i.e., the 80s), I did a lot of testing of a product that converted documents from one word processing format to another (it wasn't the Microsoft Word monoverse we have today). So I had to create huge test beds of documents to exercise our conversions on. 

Doing this I discovered many very odd capabilities in these programs that clearly their own developers had no idea existed, much less ours. One example: In WordPerfect it used to be entirely possible to insert a line spacing change command within a paragraph that only affected the lines of the same paragraph following it. Creating a situation where the same paragraph could have two different line spacings in effect. 

Of course this test document blew up the conversion. And for once I couldn't blame our engineers for not anticipating that madness.

This is the kind of clever thinking I'm talking about. Let's, then, discuss some of the relevant parameters that can hone this cleverness to a very sharp point.

Feature Scope Of Application

As in the ancient WordPerfect example given above, not all relevant, or imaginable, constraints have necessarily been foreseen when a feature was designed in Engineering. Which means a user may be able to deploy a feature in very inappropriate contexts, in very inappropriate ways.

This idea is akin to the notion of data validation, but applied to feature capability. E.g., you wouldn't expect to be able to enter a text string into a database field defined to only accept integers. Try to think through all the possible extensions to a feature's scope of application as you design tests to validate it. 

Recall that validating a feature or capability is not just a matter of verifying that it works as designed, but also of verifying that it cannot be used in ways it was never intended to be.
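
Here is a small sketch of what that looks like in test code, built around the integer-only field example above. The `save_quantity` function is a hypothetical stand-in for whatever layer enforces the constraint in your product, stubbed here so the sketch runs on its own.

```python
# A sketch of scope-of-application testing, using the integer-only field
# example. save_quantity is a hypothetical stand-in for whatever layer
# enforces the constraint; it is stubbed here so the sketch runs as-is.
import pytest

def save_quantity(value):
    """Hypothetical persistence call for a field specified to accept integers only."""
    if isinstance(value, bool) or not isinstance(value, int):
        raise ValueError(f"quantity must be an integer, got {value!r}")
    return value

# Positive cases: the feature used as designed.
@pytest.mark.parametrize("value", [0, 1, 999_999])
def test_accepts_integers(value):
    assert save_quantity(value) == value

# Negative cases: the feature deployed outside its intended scope.
@pytest.mark.parametrize("value", ["42", "", None, 3.5, True, [1]])
def test_rejects_out_of_scope_values(value):
    with pytest.raises(ValueError):
        save_quantity(value)
```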

Workflow Interruption Or Diversion

This topic assumes you are testing defined workflows as a matter of course. Because if you're not, you have bigger problems than this article can address. 

Assuming you are, be careful not to just test the so-called "happy path": the basic workflow as stipulated in the user story or requirement. In particular, be sure to include test cases where that workflow is interrupted, cancelled, and/or restarted. 

You might be surprised how many installation programs, for example, fail or become very confused when installation is interrupted or cancelled and then restarted. Or perhaps not, if you've done a lot of installation testing. 

You should also definitely add test cases where the user backtracks, returning to earlier steps to provide different inputs. This is also potentially a rich source of defects. 

In short, think of all the conditions that would introduce some kind of kink or recursion into the basic workflow of the process or feature. Don't assume these will all have been captured in the specification(s).
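
To give those cases some shape, here is a sketch of interruption and backtracking tests for a hypothetical three-step wizard. The `Wizard` class is a stub standing in for the real workflow, included only so the sketch is self-contained; the tests illustrate the structure of the cases, not any particular product.

```python
# A sketch of interruption and backtracking tests for a hypothetical
# three-step wizard. The Wizard stub stands in for the real workflow
# so the sketch is self-contained.

class Wizard:
    STEPS = ["account", "payment", "confirm"]

    def __init__(self):
        self.restart()

    def restart(self):
        self.step_index = 0
        self.data = {}

    def submit(self, value):
        self.data[self.STEPS[self.step_index]] = value
        self.step_index += 1

    def back(self):
        self.step_index = max(0, self.step_index - 1)

    @property
    def current_step(self):
        return self.STEPS[self.step_index]

def test_cancel_and_restart_leaves_no_stale_state():
    w = Wizard()
    w.submit("alice")      # interrupt the workflow after the first step...
    w.restart()            # ...then start over from scratch
    assert w.current_step == "account"
    assert w.data == {}    # nothing from the abandoned run should survive

def test_backtracking_overwrites_earlier_input():
    w = Wizard()
    w.submit("alice")
    w.submit("visa")
    w.back()               # the user returns to the payment step
    w.submit("mastercard") # and provides a different input
    assert w.data["payment"] == "mastercard"
```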

Sequentiality

Particularly in highly complex software systems, where multiple simultaneous processes are active and interacting, it is important to understand that many serious failures occur only if certain other events or interactions have taken place prior to the process that is failing. Let's call these antecedent contributing conditions. The problem detected may surface only as a consequence of a particular set of such antecedent conditions, steps, or user interactions.

Likewise, and predictably, the opposite can be the case: a failure of feature or process X may only become evident as a result of steps or interactions that follow it. That is, the failure may be masked at the moment it occurs, and only manifests itself when the user or system tries to do something later.

This is very common, and very well known, in the case of memory management. But it is equally common in other contexts. Let's call these subsequent contributing conditions.

Defining the scope of testing for sequentiality is one of the biggest challenges in test design. 

Why? Because it requires an intimate knowledge of how the various processes, services and events within a complex system may interact with one another, and for what purpose. 

You have to come up with something like a state transition model for all these interactions. Because some of them should never be possible within the system, yet nevertheless will turn out to be possible (wreaking havoc). Like having two different line spacing values in effect for the same paragraph. 
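
One way to make that model concrete is sketched below: a table of allowed transitions for a hypothetical order-processing service, from which the illegal transitions, the ones that should never be possible, are derived mechanically and turned into negative test cases. The states and transitions are illustrative assumptions.

```python
# A sketch of a state transition model for a hypothetical order-processing
# service. States and allowed transitions are illustrative assumptions; the
# useful part is deriving the illegal transitions mechanically and turning
# each one into a negative test case.
STATES = {"created", "paid", "shipped", "delivered", "cancelled"}

ALLOWED = {
    ("created", "paid"),
    ("created", "cancelled"),
    ("paid", "shipped"),
    ("paid", "cancelled"),
    ("shipped", "delivered"),
}

def illegal_transitions(states, allowed):
    """Every ordered pair of distinct states that is not explicitly allowed."""
    return [(a, b) for a in states for b in states if a != b and (a, b) not in allowed]

if __name__ == "__main__":
    for src, dst in sorted(illegal_transitions(STATES, ALLOWED)):
        # Each pair becomes a test: attempt the transition and assert that
        # the system refuses it, rather than assuming it "can't happen".
        print(f"negative case: {src} -> {dst} must be rejected")
```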

Yet, as noted above, the area of sequentiality is one of the richest sources of serious bugs in any complex software system, which, these days, is all of them. So please give it priority in your test design analysis.

Load, Complexity, Latency

Consider in your test design the system macro-factors of load, complexity and latency. All of these may affect the execution or completion of a request, process or event. 

The relevance of system load should be obvious. Requests or transactions processed during periods of high system load may fail at any stage of the transaction process, and testing under load should be a default part of your test planning. Load, as we all know, can also be generated by the request itself: a data query, for example, that by itself triggers the processing and transmission of huge amounts of data.
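
As one possible shape for that default load case, here is a sketch that fires a batch of concurrent requests at a hypothetical endpoint and checks that none of them fail or blow the latency budget. The URL, request count, and thresholds are assumptions; a real load test would likely use a dedicated tool (JMeter, k6, Locust), but the structure of the check is the same.

```python
# A sketch of a simple concurrent-load check against a hypothetical endpoint.
# BASE_URL, the request count, and the latency budget are all assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "http://localhost:8080/api/search?q=everything"  # hypothetical
CONCURRENT_REQUESTS = 50
LATENCY_BUDGET_SECONDS = 2.0

def timed_request(url):
    """Issue one request and return (HTTP status, elapsed seconds)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
        return resp.status, time.monotonic() - start

def test_search_survives_concurrent_load():
    with ThreadPoolExecutor(max_workers=CONCURRENT_REQUESTS) as pool:
        results = list(pool.map(timed_request, [BASE_URL] * CONCURRENT_REQUESTS))
    statuses = [status for status, _ in results]
    latencies = [elapsed for _, elapsed in results]
    assert all(s == 200 for s in statuses), f"non-200 responses: {statuses}"
    assert max(latencies) < LATENCY_BUDGET_SECONDS, f"slowest request: {max(latencies):.2f}s"
```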

Complexity refers here primarily to the complexity of requests asserted against a system. This complexity may consist of the number of conditions specified (and their exclusions and exceptions), the number of databases (virtual or otherwise) implicated in the requests, or the processing topology of the system itself.

I use latency in the context of this discussion to indicate the introduction of time lapses in the request process, which is not its usual meaning. I am really referring here to user latency, not response latency from the system itself. 

In other words, how does the feature or capability behave if the user comes to a certain step in the process, and then stays paused there, doing nothing? Does the system time out (it probably should)? Should it prompt the user? Should it remain in that state until the end of time? 

The answers to these questions may be provided in the specification, but the product under test may not in fact behave that way. Which is why we test to begin with.
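
Here is a sketch of how those timeout questions might be pinned down as tests. The `CheckoutSession` object and its thirty-minute idle limit are hypothetical, and the clock is injected so the test can simulate the user's pause instead of actually sitting idle for half an hour.

```python
# A sketch of testing user latency: a session left idle at a step should
# expire after its configured limit. CheckoutSession and the thirty-minute
# limit are assumptions; the clock is injected so the test can simulate the
# pause instead of actually waiting.
IDLE_LIMIT_SECONDS = 30 * 60  # hypothetical value from the spec

class CheckoutSession:
    def __init__(self, clock):
        self._clock = clock
        self._last_activity = clock()

    def touch(self):
        """Record user activity, resetting the idle timer."""
        self._last_activity = self._clock()

    def is_expired(self):
        return self._clock() - self._last_activity > IDLE_LIMIT_SECONDS

class FakeClock:
    def __init__(self):
        self.now = 0.0

    def __call__(self):
        return self.now

    def advance(self, seconds):
        self.now += seconds

def test_session_expires_when_user_pauses_too_long():
    clock = FakeClock()
    session = CheckoutSession(clock)
    clock.advance(IDLE_LIMIT_SECONDS + 1)  # the user wanders off mid-checkout
    assert session.is_expired()

def test_session_survives_a_short_pause():
    clock = FakeClock()
    session = CheckoutSession(clock)
    clock.advance(5 * 60)                  # a reasonable hesitation
    assert not session.is_expired()
```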

User Roles

End users can, of course, interact with software systems in a variety of roles, and the same user may act in different roles depending on what they are doing.

But processes and services themselves can also have different roles and associated privileges. In either case, be sure your test design and planning, whether for features or capabilities, takes into account and exercises all possible role-states.
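
A compact way to keep any role-state from slipping through is to test against an explicit permission matrix, as in the sketch below. The roles, actions, and expected outcomes are hypothetical, and `is_allowed` is stubbed to stand in for whatever enforces authorization in your product.

```python
# A sketch of exercising all role-states against an explicit permission
# matrix. Roles, actions, and expected outcomes are illustrative, and
# is_allowed is stubbed to stand in for the product's real authorization check.
import pytest

EXPECTED = {
    # (role, action): allowed?
    ("admin",   "delete_record"): True,
    ("admin",   "view_record"):   True,
    ("analyst", "delete_record"): False,
    ("analyst", "view_record"):   True,
    ("service", "delete_record"): False,  # background services act in roles too
    ("service", "view_record"):   True,
}

def is_allowed(role, action):
    """Stand-in for the real authorization layer, so the sketch runs as-is."""
    return EXPECTED[(role, action)]

@pytest.mark.parametrize(
    "role,action,allowed",
    [(role, action, allowed) for (role, action), allowed in EXPECTED.items()],
)
def test_role_matrix(role, action, allowed):
    assert is_allowed(role, action) is allowed
```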

Conclusion

There are traditional distinctions in the vocabulary of test design that have become habitual, even to those outside of QA. Their very familiarity can be a stumbling block when trying to come to grips with the complexities inherent in competent, comprehensive test design. This is the case with positive vs. negative testing. Because all of the focus areas discussed above apply equally to both. And can be difficult to discern through the haze of linguistic habit. 

Note, as well, I have not discussed so-called "smoke testing" which is just a nice way of saying, "We have no idea what we're doing, but we're doing it anyway." That concept should have no place in your QA vocabulary or practice.

I will not pretend that the discussion presented above is in any way exhaustive. It is merely a preface to an introduction to a table of contents. It is meant above all to stimulate your own thinking on the subject, and how to improve the discipline of test design as a whole.

I welcome any insights or ideas my readers may have. Best of luck in your QA efforts.

By Niall Lynch

Niall Lynch was born in Oslo, Norway and raised in Fairbanks, Alaska, 100 miles south of the Arctic Circle. He received a BA in Religion from Reed College, and an MA in Ancient Near Eastern Literature Languages from the University of Chicago. Which of course led directly to a career in software development. Niall began working in software in 1985 in Chicago, as a QA Lead. He knew nothing about the role or the subject at the time, and no one else did either. So he is largely self-taught in the discipline. He has worked over the years in the fields of file conversion, natural language processing, statistics, cybersecurity (Symantec), fraud analysis for the mortgage industry, artificial intelligence/data science and fintech. Learning how to adapt his QA methods and philosophy to these wildly different industries and target markets has been instructive and fruitful for their development. And his. He now lives in Palm Desert, California, where he does SQA consulting and is writing a couple of novels. Send questions or jokes to NiallLynch@outlook.com