
Editor’s Note: Welcome to the Leadership In Test series from software testing guru & consultant Paul Gerrard. The series is designed to help testers with a few years of experience—especially those on agile teams—excel in their test lead and management roles.

In the previous article, we outlined a risk manifesto to help managers. In this article, we ask a similarly time-honored question: "How much documentation is enough?". Spoiler: it's down to the stakeholders.

Sign up to The QA Lead newsletter to get notified when new parts of the series go live. These posts are extracts from Paul’s Leadership In Test course which we highly recommend to get a deeper dive on this and other topics. If you do, use our exclusive coupon code QALEADOFFER to score $60 off the full course price!

No matter what project, organisation or approach, there is always a place for documentation. Good documentation is a godsend, providing a useful record of approach, scope, plans, designs and the outcomes of analysis, development and test activities. 

In this article, I’ll cover:

  • The value of documentation
  • The perils of templates and cut/paste
  • The main types of test documentation, with agile/continuous considerations
  • Some advice on designing your own documentation

Let's go.

The Value Of Documentation

In structured projects, documents are usually deemed to be deliverables in their own right. In Agile or continuous regimes, documentation might be produced as a by-product that may be of more or less value.

Documentation can be a Valuable Repository of Knowledge

Document-writing may be the primary activity of professional technical writers but, for most practitioners, it’s a chore — no matter how useful it may be. Whilst document writing may be boring for some people, the real problem with documentation is that, in many contexts, most documentation is simply a waste of time. It is of little value, out of date, inaccurate — or all three.

Every test manager has written test strategies that people did not read or buy into. Testers write reams of test plans, scripts and reports, and often the only content of value to stakeholders is the one-page summary at the start or end.

We have all written documents that we know have little value and no one will read.

This stems from common problems with documentation that we encounter in projects both large and small. For every document we write, there are several questions we need to address:

  • What type of document? A policy or strategy, an approach or plan, a design or implementation, or an outcome and interpretation?
  • What is the aim of the document?
  • What content is required to meet that aim?
  • What sources of knowledge are required to create the content?
  • If the document must change over time, how will it be maintained?
  • What level of detail is required?
Living Documents
As a test manager or as a team, you’ll need to figure out what types and formats of documentation are appropriate and, if they are to be accurate records or so-called living documents, how they are maintained.

The Perils of Templates and Cut/Paste

If your project determines that a certain document is required, say a system test plan or a risk register, it is tempting to find an off-the-shelf template for these (and many other document types) on the internet.

Some templates may claim to adhere to some standard or convention and to have been downloaded and used thousands of times. Sometimes a template may appear to suit your purpose exactly. But, as we’ll see, even if the table of contents looks comprehensive it can get you into trouble.

It may also be the case that you or others in your company have prepared a similar document for previous projects. You may be tempted to copy and rename this document, change the references to the old project, and edit the content to suit.

Warning: In my experience as an independent reviewer, this is very common and often a real problem.

For a start, it's usually blatantly obvious that a copy/edit has been done: the language often seems disconnected from the project, and there are gaps and superfluous text everywhere. Why is this?

Using a pre-formed template or existing document as a source carries several risks:

  • It looks comprehensive, but it might include topics that are inappropriate and exclude others that are essential.
  • It provides headings for a document, but absolutely no guidance as to what content is appropriate for each heading.
  • It might contain text that looks reusable and is copied unchanged from a previous unrelated project, but that text might give a wrong impression, or be inaccurate or incomplete.

Using templates to get headings and basic formatting layout might be helpful, but the main problem with templates is this:

Templates Might Save Some Time
Using a template might save some time; the risk is you won’t put enough thought into the writing.

The temptation with templates is to overly trust them and then write waffle for the various sections. After all, you might think the document completes a ‘tick box’ and no one will read it anyway. The risk of templates is you stop thinking and write a document that has little value.

Types of Test Documentation

In this section, we'll look at the various forms of test documentation and discuss some considerations for agile/continuous projects compared with structured (waterfall) ones.

The core set of test documents tends to fall into the following categories:

  • Policy and strategy (sometimes known as a Master Test Plan)
  • Test definition (aka specification or test plans, confusingly):
    • Test design
    • Test cases
    • Test procedures or scripts
  • Test execution:
    • Schedule
    • Log
  • Test report

This range of document types covers the definition of the test process and the key activities of test definition, execution and reporting.

There are several other test-related documents; in more bureaucratic environments, these would include test environment definitions and management processes, acceptance procedures, incident management processes and so on (we'll be covering incident management in a future article).

Another obvious omission from the above is an overall plan or schedule for test activities. A schedule isn't really a test document; it's a subset of the overall project plan for a structured project (we'll also be covering schedule planning in a future article, so stay tuned!).

Policy, Strategy, Master Test Plan

Purpose
  • A policy usually covers an organization and includes a subset of topics covering all projects. A strategy usually covers a single project (or application)
  • Overall, the strategy records decisions made on logistical questions of approach, handoffs, responsibilities, environments etc.
  • Some of these decisions can be made ahead of time and documented in the strategy
  • Some decisions can’t be made now, but the strategy can document the process or method or information that will allow decisions to be made (in project)
  • For uncertain situations, or unplanned events where decisions need to be made, the strategy will document the general principles (or process) to follow.
Content
  • Stakeholders, goals, key risks of concern
  • Test principles/approach to be adopted, e.g. risk-based test approach
  • Test process (stages of testing):
    • goals and scope
    • acceptance criteria
    • methods, techniques
    • deliverables (documents)
    • responsibility
  • Non-functional/technical test activities
  • (Test) supplier management policy
  • Incident management process
  • Sources of test data
  • Test environments
  • Tools/automation strategy
  • Documentation formats/templates
Sources
  • Stakeholders, users, BAs, developers, operations
Maintenance
  • Usually, a one-off document defined for a project or program
Agile/Continuous Considerations
The test strategy for an agile project using Scrum, for example, is likely to be rather brief, comprising just a few pages (if it is documented at all). The test process might not have stages, but there is likely to be a definition of testing at different levels. For example:
  • Testing in a sprint or iteration
  • Testing for a release
  • System Integration testing (with other systems or interfaces)
  • User testing (in sprint and/or release-level acceptance).
How tools are used in developer testing (and using BDD or TDD for example) is likely to go undocumented, but development teams would be expected to evolve an approach and liaise with other team members as they commit new code. The role of the tester might be to test features interactively as they are released by developers, or to act as a testing coach to the rest of the team. Like the use of tools and/or TDD, the way of working would evolve over time and might never be documented formally.

Test Definition (Design, Cases, Procedures)

Purpose
  • To demonstrate the flow or traceability between sources of knowledge and the tests to be performed
  • To document the coverage (against multiple models) of aspects of the requirements, system features, or user behavior
  • To enable stakeholders to review the scope, approach, coverage, and choices made in creating tests to be applied
  • To provide instructions for the performance of tests at some agreed level of detail.
Content
  • Scope of testing – at both a high level (e.g. features) and a lower level (e.g. models of behavior)
  • Coverage of tests versus items in scope (e.g. a requirements coverage matrix or other test model – see the sketch after this list)
  • Test cases identifying features, pre-conditions, inputs, and post-conditions (including expected results)
  • Test procedures, reusable to execute selected test cases.
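
To make the coverage idea concrete, here's a minimal sketch in Python of a requirements coverage matrix that flags untested requirements. The requirement IDs and test case names are invented for illustration; a real project would draw these from a test management tool.

```python
# A minimal sketch of a requirements coverage matrix.
# Requirement IDs and test case names are invented for illustration.

coverage = {
    "REQ-001 Login with valid credentials":  ["TC-01", "TC-02"],
    "REQ-002 Lockout after 3 failed logins": ["TC-03"],
    "REQ-003 Password reset by email":       [],  # not yet covered
}

for requirement, tests in coverage.items():
    status = ", ".join(tests) if tests else "NO TESTS - coverage gap"
    print(f"{requirement}: {status}")
```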
Sources
  • Stakeholders, users, requirements, designs, and specifications
Maintenance
  • In principle, with fixed requirements, there should be one agreed version of these documents
  • Where requirements or scope changes occur, testers will need to adjust the documents, maintain the traceable aspects of the documentation, and provide a configuration management or change log.
Agile/Continuous Considerations
The area of test definition is where the agile approach differs most markedly from structured projects. Potentially, testers who are focusing on features as they are delivered may not create any documentation at all. This is appropriate if there is a system-wide policy or charter for feature testing, for example. More likely, there would be a brief charter for testing each feature in an exploratory test session. A charter is like a plan for a short period of exploration. The charter would typically identify:
  • The scope of the test session – the feature(s) to cover and/or some specified functionality or behavior of the system
  • The aim of the session – to explore certain aspects of behaviors, to focus on some risk or mode of failure, to apply some selected scenarios
  • The duration of the session – typically 45-120 minutes. The session is necessarily limited in scope, but testers are free to explore outside it if they see value in doing so
  • A charter may aim to focus on exploration – to learn what a feature does, to identify specific behaviors worth testing, to assess how much testing/how many sessions are required to test a large or complex feature, to understand what test data might be required to test it and so on
  • A charter may focus specifically on testing a feature, but may highlight some areas that need more attention than others.
BDD tools and stories in a selected format – for example, Cucumber or Gherkin formatted stories and scenarios – might provide the traceability and content that test designs and procedures do. Each scenario, with its 'given/when/then' clauses, identifies pre-conditions, inputs and post-conditions. Scenarios are referenced to a single feature, so they provide a minimal test case/procedure and are traceable to features at least.

Tests that have been scripted to be run by tools may or may not have intermediate documentation. Teams rely more on observation of automated tests and tables of test data used by automated scripts than on documented test designs.
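
As an illustration of that traceability, here is a minimal Python sketch that extracts a feature-to-scenario list from Gherkin-style text. The feature and scenarios are invented; a real project would use Cucumber's own reporting or a proper Gherkin parser rather than this toy.

```python
# A toy sketch (not a real Gherkin parser): list the scenarios that
# trace to a feature. The feature text below is an invented example.

gherkin = """\
Feature: Account login
  Scenario: Valid credentials
    Given a registered user
    When they log in with a valid password
    Then they see their dashboard
  Scenario: Locked account
    Given a user with three failed login attempts
    When they log in with a valid password
    Then they see a lockout message
"""

feature, scenarios = None, []
for line in gherkin.splitlines():
    text = line.strip()
    if text.startswith("Feature:"):
        feature = text.removeprefix("Feature:").strip()
    elif text.startswith("Scenario:"):
        scenarios.append(text.removeprefix("Scenario:").strip())

# Each given/when/then triple supplies pre-conditions, inputs and
# post-conditions; each scenario traces back to a single feature.
print(f"{feature}: {scenarios}")
```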

Test Execution (Schedule, Log)

Purpose
  • To specify the running order of tests
  • To record the status of tests – run/not run and status
  • To provide the test execution results for reporting
Content
  • Test identifier, tester, date/time run, status (a minimal record structure is sketched after this list)
  • For tests that appear to show anomalous behavior (optionally):
    • Details of the test as run where details differ from the script
    • Actual v expected outcomes
    • Other observations, interpretation
    • Test status (defect, setup or environment anomaly etc.)
    • Observation or defect report id (where appropriate)
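
As a sketch of the minimal record behind such a log (the field names here are my own, not a standard), one entry might look like this in Python (3.10+):

```python
# A sketch of one test execution log entry; field names are
# illustrative, not a standard. Requires Python 3.10+.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestLogEntry:
    test_id: str                     # test identifier
    tester: str
    run_at: datetime                 # date/time run
    status: str                      # e.g. "passed", "failed", "blocked"
    notes: list[str] = field(default_factory=list)  # observations, anomalies
    defect_id: str | None = None     # defect report id, where appropriate

entry = TestLogEntry("TC-03", "pat", datetime.now(), "failed",
                     notes=["actual total off by 0.01"], defect_id="DEF-42")
print(entry)
```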
Sources
  • Test case/procedure inventory, testers
Maintenance
  • The schedule would change in line with the scope of testing, as procedures are amended, removed or added to the plan.
  • Tests are likely to be run several times as either re-tests or regression tests. The Log should hold a complete history for all tests in scope.
Agile/Continuous Considerations
If agile/continuous projects do not commit to test definition documents, they compensate somewhat by encouraging testers to keep better logs of test execution. Where testing is performed in sessions against charters, the tester is expected to keep good notes of the tests they run. There are few dedicated test-logging tools that are much more than notebooks, so many testers use simple text editors, note-takers or hardcopy notebooks. Logs tend to record all the significant activity and observations in a session as the tester performs it. A typical exploratory testing log would contain aspects such as:
  • Structure of the features explored (a map of the territory)
  • Observations, questions relating to features explored in the session
  • Models, lists, tables of test items and test ideas
  • Tests run, captured in enough detail to replay them
  • Anomalies found – failures, questionable behavior, slow responses, poor user experience and so on
  • Time spent on exploration, test setup, testing, investigation, bug logging, retesting, regression testing, non-productive time
  • Date/time of entry
Where testers log their session activity in note-taking or other tools, they use some markup or other domain-specific language to structure their notes. These notes can then be parsed by simple in-house tools to provide summaries of activity for use in the test report. Tests executed by tools (whether proprietary or open source) keep logs automatically; typically, these logs can be queried by the tool or by user-written procedures.
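
To illustrate what such in-house parsing might look like (the TEST:/NOTE:/BUG: markup here is an invented convention, not a standard), a few lines of Python are enough to summarize a session log:

```python
# A sketch of an in-house session-log summarizer. The TEST:/NOTE:/BUG:
# markup is an invented convention; use whatever tags suit your team.

session_log = """\
TEST: open the export dialog from the report screen
NOTE: dialog takes about 3s to appear - worth timing properly
TEST: export with an empty date range
BUG: export silently produces an empty file for empty ranges
TEST: export 10,000 rows to CSV
"""

summary: dict[str, int] = {}
for line in session_log.splitlines():
    tag, _, _ = line.partition(":")
    summary[tag] = summary.get(tag, 0) + 1

print(summary)  # e.g. {'TEST': 3, 'NOTE': 1, 'BUG': 1}
```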

Test Report

Purpose
  • To communicate the outcome of a test stage, selected tests or a test session
  • Can also apply to technical requirements or non-functional test activities, in which case the content would differ to match the test objective
  • To inform stakeholders, at least in part, enabling them to make a decision on acceptance or release of a system or subsystem.
Content
  • Test start/end times and duration
  • Test environment
  • Software and system version(s) under test
  • Goals and scope of testing (from test strategy, test definition)
  • Narrative summary of results
  • Features deemed to be working as required
  • Risks deemed to have been addressed
  • Outstanding tests of significance (failed or blocked)
  • Features partially and not tested
  • Risks partially or not addressed.
  • Test result details with output from test logs etc.
  • Workaround status for outstanding anomalies
  • Test analyses
  • Test progress and status over time
  • Incident statistics
Sources
  • Test strategy, test definitions, test log, incident reports
  • Much of the content of a test report will come from the tools used to record test definition(s), the test log and the incident or defect log.
Maintenance
  • This is a snapshot in time for a test execution phase and is not maintained.
Agile/Continuous Considerations
The test report in an agile project could cover a single iteration or sprint, testing for a release, or a higher-level test phase such as integration or overall system acceptance. Whichever it is, the purpose is unchanged. Much of the content will come from tools or from testers' notes. The narrative summary of results is written by a test lead or, for a smaller-scale test phase, by the tester. As usual, the report is likely to be less formal, and there would probably be less raw data on which to base sophisticated analyses.

During the iteration(s), progress in terms of features or stories delivered, tested and approved by the users might be recorded in a tool or on a public Kanban board. In this way, stakeholders are kept informed of progress throughout the iteration, and there is less need for a formal report at the end of a period of testing.

Visibility of progress is a key concern for agile teams. With regular, perhaps daily, stand-ups around a Scrum board, for example, team members share their understanding of progress, question each other and agree a position (and next steps) constantly. A formal test report might never be required because the team is always informed and up to date.

If testers keep only written notes of their session activities, there is no analyzable data from which to generate automated reports, so session and progress reporting might be presented in a public, visual way. This requires a high degree of discipline and good communication skills on the part of the testers, and the test lead or manager will have to provide a correspondingly informative report to stakeholders based on verbal reports.

Some Advice

The subject of documentation in projects is a sensitive one for testers as well as other project team members.

Most people regard writing documentation as a chore.

Here are some things to bear in mind in designing your documentation.

  1. Documentation must have a well-defined purpose and audience. If your audience doesn't need the documentation, they won't read it. If it doesn't resonate with their own goals, they won't buy into it.
  2. It is generally better to capture the record of an activity before or as you perform it. TDD captures tests before code is written, for example. Test session logs should be captured during the session, not written afterward.
  3. The essential data to record an aspect of testing might be minimal. For example, a test logged in a notebook might be sufficient for the tester but can't easily be analyzed. Perhaps a simple text log with some markup would be as fast to capture but could also be analyzed by a custom tool.
  4. Test procedures might not be necessary at all if testers know the system under test well. Perhaps only a test objective or charter is necessary. Prepared test cases can be minimally documented in a spreadsheet (a minimal sketch follows this list).
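
As a sketch of that last point (the columns and values are illustrative, not a recommended schema), prepared test cases can be captured in a spreadsheet-friendly CSV with a few lines of Python:

```python
# A sketch of minimally documenting prepared test cases in a
# spreadsheet-friendly CSV. Columns and values are illustrative only.
import csv

test_cases = [
    {"id": "TC-01", "objective": "Valid login reaches dashboard",
     "preconditions": "Registered user", "expected": "Dashboard shown"},
    {"id": "TC-02", "objective": "Lockout after 3 failed logins",
     "preconditions": "User with 2 recent failures", "expected": "Lockout message"},
]

with open("test_cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(test_cases[0].keys()))
    writer.writeheader()
    writer.writerows(test_cases)
```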

Finally

Necessity is the mother of documentation.

If you prepare comprehensive documentation for your stakeholders and they don’t read it, it’s because they don’t see value in it.

It's better to present a stakeholder with a blank piece of paper, jointly add the topics they need to see in a document, and work from there. It may be they ask for reams of content, but what they actually need is rather simple. Keep asking, 'why do they want this?'

Documents with Purpose
Design your documentation with purpose; don’t just provide content you think might be useful to others because you might have the data.

Thanks for reading, and join us next time as we roll up our sleeves and start on some test planning.

Sign up to The QA Lead newsletter to get notified when new parts of the series go live. These posts are extracts from Paul’s Leadership In Test course which we highly recommend to get a deeper dive on this and other topics. If you do, use our exclusive coupon code QALEADOFFER to score $60 off the full course price!

By Paul Gerrard

Paul is an internationally renowned, award-winning software engineering consultant, author, and coach. He is the host of the Technology Leadership Forum and the Programme Chair of the 2014 EuroSTAR Testing conference.