
A lot of discussion about QA tends to involve grand concepts and topics. Lots of jargon and white papers and TED talks. It can get pretty abstract pretty fast. And, let's face it, often not terribly relevant to the practical obstacles working QA teams face on a daily basis. 

So sometimes it's useful to slow our roll and focus on some of the humbler aspects of the discipline. In this case, test documentation. Seemingly a simple subject, but, as with all things related to software development, monsters lurk beneath the floorboards.

The question of how, and how much, test documentation to write almost immediately lands QA in a Catch-22. Too little, and it's hard to know where you stand with respect to the effort as a whole. Harder still to reassign testing tasks to other testers as needs dictate while keeping the testing consistent. Yet too much and — oh wait. There is never too much test documentation, is there?

Or is there?

I have often seen QA teams conscientiously, diligently embark on the heroic task of completely documenting their tests. For one release. And then that documentation tends to fall by the wayside. Gathering digital dust on a network somewhere. And not because the QA team has stopped believing in its importance. They're just completely exhausted by the effort necessary to update it for the next major release. 

Because, in their zeal, the QA team wrote incredibly detailed test descriptions, breaking things down to a maniacally microscopic level of hyper-specificity. Creating separate test documentation for every facet or attribute of a single feature or user interaction. Generating dozens of individual test descriptions and tasks for what is basically a single testing task. It's like that person you land behind in the line at the grocery store who insists on paying for $80 worth of groceries with small change.

The result of this atomizing zealotry is the generation of hundreds, sometimes thousands of test descriptions for the release in question. QA has, unwittingly, wound up writing a Dickens novel. A Tale of Ten Thousand Cities. Ever actually read one of those? Me neither.

The upshot is that no one has the time or the patience to update these thousands of individual test descriptions. Because the priority will always be testing the next iteration of the software instead, since that ultimately generates revenue while updating test documentation does not. 

But also because test documentation applications don't make it at all easy to do mass updates of existing records. Making global changes to classes of tests can be arduous and non-intuitive in most of these tools (this is also true of many defect tracking apps, even today). Which adds to the cost, in time and mental health, of keeping the documentation current.

As a result, all that meticulous test documentation winds up being abandoned. And all the time spent creating it is, ultimately, wasted. Because it's not reusable. It's a one-hit-wonder. It's the "I Melt With You" of software testing. Yet test documentation is an absolute necessity for a repeatable, systematic testing effort. Hence the Catch-22.

There is a way out of this dilemma. And, as much as it pains me to admit this, we should look to engineering for the solution. Or at least for its inspiration. Engineers faced a similar issue in the early years of commercial software development. Due to the limitations of the coding languages of the time, for each function, they had to rewrite code for the same basic attributes, actions, and safeguards. Over and over again. Memory management/garbage collection was a good (and dangerous) example of this problem.

As a result, engineers generated huge amounts of redundant code and wasted huge amounts of time (not that engineers necessarily mind the latter; eBay still exists for a reason). Then some smart cookie came up with object-oriented programming, where coding objects could be created that inherited default attributes by their very nature. The same code didn't have to be written (or cut and pasted) each time. Which meant quality no longer depended on any particular engineer's memory or typing skills. For which QA is eternally grateful.

This is a very useful vision for QA as it grapples with the cul de sac of test documentation. The concept of object orientation as it exists in engineering can't be applied directly to this problem, but as a metaphor, it has much to offer. Cleverness, by definition, easily adapts to new situations.

I have many times in my career faced the problem of how to create test documentation that is comprehensive but concise, actionable in the moment but also easily reusable for future releases. And the solution I finally settled on is to apply the idea of an "object" to test documentation itself. 

Here I'm talking about far more than a test documentation template. Templates are very useful as a standardization tool, of course, but irrelevant to the problem at hand. I developed the idea that a test could be documented in a way that captured, in one piece of documentation, all of its various modes, inflection points, workflows, and special conditions. Once this light bulb went off in my dim head, the answer suddenly seemed very simple. And very doable.

Let's get into that.

Taking The Red Pill

The answer is to see any individual test, and therefore its documentation artifact, not as the assertion or description of testing against a single data point or isolated condition, but as a description of the entire matrix (see what I did?) of conditions in which the feature or capability needs to be validated.

This involves a holistic approach to individual test definition. A single, coherent testing task cannot usefully be splintered into dozens of independent "tests" without, paradoxically, losing sight of the feature as a whole. It's a forest-for-the-trees situation.

Here it is useful to review the concept of a test context. Because to implement the idea of a test object effectively, you must first be able to separate out the unchangeable core of a piece of functionality from its secondary contexts of application and operation. This is a question that the atomic style of test documentation avoids, and so testers never learn how to perform this analysis systematically and consciously. Yet another drawback to atomic power.

Put simply, test contexts for a feature or system are the set of conditions, environments, workflows, or states that can vary independently of one another within the boundaries of the feature itself. Operating systems and OS versions are a clear example. Foreign languages are another. System states are yet another (e.g., in the case of a web server, memory caching on or off). Once you grasp this distinction, you can easily think of many others relevant to the types of products you are testing.

Once you have completed this analysis (which should be the foundation of all professional test planning anyway), you can then create compact test documentation that internalizes this distinction. In the system I am describing, a single test documentation artifact will consist of the following (a sketch of such an artifact follows the list):

  1. A description of the core feature or capability under test.
  2. A list of all the contexts in which the core functionality must be validated.
  3. Anything else you would normally have included anyway (preconditions for the test to be valid, system resources and privileges necessary to run the test, a link to the relevant product requirements, etc.). None of these is replaced by what I am proposing.
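
To make that structure concrete, here is a minimal sketch of what such a test object might look like if you expressed it as a data structure. This is purely illustrative: Python is just a convenient notation, and every name and value here (the feature, the context dimensions, the requirement ID) is invented for the example, not taken from any tool or standard.

```python
from dataclasses import dataclass, field
from itertools import product

@dataclass
class TestObject:
    """One documentation artifact: a core capability plus its full context matrix."""
    test_id: str
    core_description: str                                # 1. the core feature or capability under test
    contexts: dict = field(default_factory=dict)         # 2. context dimensions, each varying independently
    preconditions: list = field(default_factory=list)    # 3. the usual extras still belong here
    requirements_link: str = ""

    def matrix(self):
        """Yield every combination of contexts in which the core functionality must be validated."""
        dims = list(self.contexts.keys())
        for combo in product(*self.contexts.values()):
            yield dict(zip(dims, combo))

search_test = TestObject(
    test_id="SRCH-001",
    core_description="Search returns relevant results for a valid query",
    contexts={
        "os": ["Windows 11", "macOS 14", "Ubuntu 22.04"],
        "language": ["en", "de", "ja"],
        "server_cache": ["on", "off"],
    },
    preconditions=["Index built", "Test user has read access"],
    requirements_link="REQ-412",
)

print(sum(1 for _ in search_test.matrix()))  # 3 x 3 x 2 = 18 contexts, one artifact
```

One artifact, eighteen cells of the context matrix. The atomic approach would have produced eighteen separate documents to write now and neglect later.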

Now at this point you might be thinking, "Hey, that could be a lot of contexts!" Well, sure. But look at it this way. Doing it like this doesn't create any more work for you. In fact it reduces it. Because this method will *greatly* reduce the number of individual tests you have to spend time creating, and that you will have to maintain (or refuse to) in the future. Whereas in the atomic system, you would have to create an individual test documentation artifact for each of those contexts. Which leads to the Catch-22 described at the beginning of this article.
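
If it helps to see the same arithmetic in executable form: teams that automate their checks with pytest (an assumption on my part; the point translates to any framework, or to purely manual documentation) get exactly this one-definition-many-contexts behavior from stacked parametrization. The helper and the values below are again invented for illustration.

```python
import pytest

# Illustrative context dimensions; swap in whatever your own context analysis produced.
OSES = ["Windows 11", "macOS 14", "Ubuntu 22.04"]
LANGUAGES = ["en", "de", "ja"]
CACHE_STATES = ["on", "off"]

def run_search(term, os_name, language, cache_state):
    """Stand-in for your real test driver; a hypothetical helper, not a real API."""
    return [f"{term}/{os_name}/{language}/{cache_state}"]

# Stacked parametrize decorators expand to the cross-product:
# 3 x 3 x 2 = 18 executions of ONE definition, not 18 separately written tests.
@pytest.mark.parametrize("os_name", OSES)
@pytest.mark.parametrize("language", LANGUAGES)
@pytest.mark.parametrize("cache_state", CACHE_STATES)
def test_search_returns_results(os_name, language, cache_state):
    assert run_search("invoice", os_name, language, cache_state)
```

Eighteen executions, one definition to maintain. The documentation version of the same idea is one artifact to maintain.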

This method of test documentation does have certain process consequences that may at first seem inconvenient. Or even unsettling. And not just to QA, but to other project stakeholders. Yet, if all involved take the time to understand them, they will see they are really improvements. For everyone.

Chief among them is that bundling the matrix of feature contexts into the main test itself means, logically, that if that single test fails in any one of those contexts, then the whole test has failed. Even if it's just one. I can see this causing panic in the ranks of engineering. Because it may seem you are stacking the deck against them, raising the bar to an impossible level for a test to count as "passed". But this fear is easily dealt with. Just point out two things to them, and a third to yourself (a small sketch of the bookkeeping follows the list):

  1. This method will actually reduce the number of bugs generated against their work by testing. Since with the atomic method, the failure of any one of those contexts would have produced its own separate bug. Thus upping the aggregate bug count against that single feature. This should put a smile on engineers' faces. And project managers'.
  2. Testing with this methodology automatically provides context for the actual scope of the functional failure. Because what is the first question the engineer assigned to fix the bug is going to ask you? "Uh, well, does it happen everywhere? Or just in context X?" And instead of having to rush back to your desk and rerun the test in context X (because maybe you forgot to do that in your original testing?), you will already have that answer, and the engineer will be deprived of an excuse to avoid working on it (what QA giveth in fewer bugs, it taketh away later).
  3. The context analysis behind the test documentation ensures that you actually thought about all the relevant contexts when documenting the test in the first place. Which means you're less likely to have those panicked moments, three days before (or, worse, after) release, when you realize you forgot to test in some very important ones. A panic-based QA effort is not a very effective one. Nor a terribly happy experience psychologically.
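
And here is the bookkeeping promised above, as a minimal sketch. It assumes you record a pass or fail for each context when you execute the test object; the names are mine, not any tool's. The single verdict covers the first point, and the list of failing contexts answers the engineer's question in the second.

```python
from dataclasses import dataclass

@dataclass
class ContextResult:
    """Outcome of executing the test object in one cell of its context matrix."""
    context: dict   # e.g. {"os": "Windows 11", "language": "de", "server_cache": "off"}
    passed: bool

def verdict(results):
    """One pass/fail for the whole test object, plus the scope of any failure."""
    failing = [r.context for r in results if not r.passed]
    return {
        "passed": not failing,        # the test passes only if every context passed
        "failing_contexts": failing,  # ready answer to "does it happen everywhere?"
        "note": f"{len(results) - len(failing)} of {len(results)} contexts passed",
    }

results = [
    ContextResult({"os": "Windows 11", "language": "de", "server_cache": "off"}, passed=False),
    ContextResult({"os": "macOS 14", "language": "en", "server_cache": "on"}, passed=True),
]
print(verdict(results))
```

One failing context means one bug against the feature, and the bug report already says exactly where the failure lives.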

In Machina

This object-inspired method of test documentation will drastically reduce the number of documentation artifacts you have to produce, and therefore make it more likely you will actually be able, and willing, to keep updating them for future releases.

The one fly in this ointment is that few test documentation applications are set up to document tests in this way natively. They tend to assume the atomic documentation model, since it is sadly the norm, and so have no built-in functionality to easily accommodate the object model I have described here. 

This is why I have often resorted to much less elaborate solutions, like Excel, which are actually, for this purpose, far more flexible and easier to customize precisely because they are general purpose. The downside to that strategy is that integrating them with Jira et al. becomes cumbersome. But then, perhaps integration is overrated.
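
One possible flat layout in a spreadsheet is simply one row per test object, with each context dimension in its own column as a delimited list. Here is a rough sketch, using CSV as a stand-in for an exported sheet and the same invented example as before; the column names and delimiter are arbitrary choices of mine, not a prescription.

```python
import csv
import io

# Hypothetical export of the sheet: one row per test object,
# context dimensions as semicolon-delimited lists in their own columns.
SHEET = """\
test_id,core_description,os,language,server_cache,preconditions,requirements_link
SRCH-001,Search returns relevant results,Windows 11;macOS 14;Ubuntu 22.04,en;de;ja,on;off,Index built;Read access,REQ-412
"""

CONTEXT_COLUMNS = ("os", "language", "server_cache")

for row in csv.DictReader(io.StringIO(SHEET)):
    contexts = {col: row[col].split(";") for col in CONTEXT_COLUMNS}
    print(row["test_id"], contexts)
```

Nothing about this requires a specialized test management tool; a sheet, a naming convention, and a few lines of scripting recover the whole object model.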

In any event, the fact remains that many test and defect documentation applications make it fiendishly difficult to do batch updates of individual records, regardless of what conventions you use in creating them. But that is true no matter how you choose to write your test documentation. One problem at a time, people.

I apologize in advance (or, in your case, in retrospect) for not giving you a test object template. It's been my experience that providing templates is usually a bad idea, because people just start using your template. The availability of ready-made, generic templates tends to preempt and short-circuit the creativity of the team itself in devising a template that conforms to its actual needs, processes, and tools on the ground. So no hard feelings.

I am always open to your questions and suggestions. Just post or message them to me on LinkedIn.

And, as always, best of luck.

By Niall Lynch

Niall Lynch was born in Oslo, Norway and raised in Fairbanks, Alaska, 100 miles south of the Arctic Circle. He received a BA in Religion from Reed College, and an MA in Ancient Near Eastern Literature Languages from the University of Chicago. Which of course led directly to a career in software development. Niall began working in software in 1985 in Chicago, as a QA Lead. He knew nothing about the role or the subject at the time, and no one else did either. So he is largely self-taught in the discipline. He has worked over the years in the fields of file conversion, natural language processing, statistics, cybersecurity (Symantec), fraud analysis for the mortgage industry, artificial intelligence/data science and fintech. Learning how to adapt his QA methods and philosophy to these wildly different industries and target markets has been instructive and fruitful for their development. And his. He now lives in Palm Desert, California, where he does SQA consulting and is writing a couple of novels. Send questions or jokes to NiallLynch@outlook.com