
Editor’s Note: Welcome to the Leadership In Test series from software testing guru & consultant Paul Gerrard. The series is designed to help testers with a few years of experience—especially those on agile teams—excel in their test lead and management roles.

In the previous article, we talked about planning a test project and what must be considered. Now that you’ve sharpened your axe, so to speak, it’s time to execute.

Sign up to The QA Lead newsletter to get notified when new parts of the series go live. These posts are extracts from Paul’s Leadership In Test course which we highly recommend to get a deeper dive on this and other topics. If you do, use our exclusive coupon code QALEADOFFER to score $60 off the full course price!


Are You Ready?

Gladiators, the time has come to do the job. The purpose of this article is to take you through how to execute a testing project. I’ll be covering:

  • The classic squeeze on testing and partial, incremental delivery
  • Defending testing
  • Reporting success and failure
  • Coverage erosion
  • Incident management
  • Managing the end-game

Now, are you ready? There are four critical aspects:

  • People – is your team ready?
  • Environments – do you have the technologies, data, devices, and interfaces to implement meaningful tests?
  • Knowledge – have you prepared your tests at an appropriate level of detail, or is your team ready and able to explore and test the system in a dynamic way?
  • System Under Test – is the software or system you are to test actually available?

The first three aspects are either under your control or you have the means to monitor, manage and coordinate action to provide people, environments, and knowledge. The system under test is another matter. If the system under test is delivered late, then you cannot make any meaningful start to the testing. This is the classic squeeze on testing.

The Classic Squeeze on Testing

Anyone who has tested systems has experienced late delivery of the system to be tested. At almost every level, from components to entire systems, developers encounter problems and delivery is delayed or incomplete. In most circumstances, where a fixed period has been assigned to conduct the testing, the deadline does not move and the testing is squeezed. This forces teams to choose between quality and speed.


Partial and Incremental Delivery

Although the complete system cannot be delivered for testing, some functionality – a partial system – can be delivered on the promise that later releases will contain the remaining functionality. At any time, each feature in a release will be in one of three states (a simple way of tracking this is sketched after the list):

  • Completed as required: these features are testable – at least in isolation.
  • Incomplete: features omit functionality and/or are known to be faulty.
  • Missing: postponed to be delivered in a later release.
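When deliveries arrive piecemeal, it helps to record each feature’s status explicitly so you always know what is testable right now. Here is a minimal tracking sketch in Python – the status values simply mirror the three states above, and the feature names are invented for illustration:

```python
# Minimal sketch: tracking feature status across incremental releases.
# The status values mirror the three states above; feature names are invented.
from enum import Enum

class FeatureStatus(Enum):
    COMPLETED = "completed as required"   # testable, at least in isolation
    INCOMPLETE = "incomplete"             # partial or known to be faulty
    MISSING = "missing"                   # postponed to a later release

release_1 = {
    "create_order": FeatureStatus.COMPLETED,
    "amend_order": FeatureStatus.INCOMPLETE,
    "invoice_order": FeatureStatus.MISSING,
}

testable_now = [f for f, s in release_1.items() if s is FeatureStatus.COMPLETED]
print("Testable in isolation:", testable_now)  # ['create_order']
```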

As for the features that are available, it may be possible to test them in isolation. However, they may depend on data created by features that are not yet available, which makes testing them more difficult.

A feature may be available while the features that would normally consume and verify its output are not, so you have to examine the test database before and after each test instead. It is almost certain that your end-to-end tests, which require testable chains of features, will be mostly blocked.

In almost all respects, system-level testing of partial systems is severely hampered.

Defending Testing


Your team must make progress, however dogged that progress has to be. If the system is not available, or is only partially available, you’ll have to manage expectations and defend your plan.

Your plan for testing, in the small or in the large, depends on the system being available – that is a planning assumption – so when the assumption fails, the plan must change. You may not document formal entry criteria, but the message is the same:

Entry Criteria are planning assumptions – if these criteria are not met, your planning assumptions are wrong and the plan needs to be adjusted.
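One lightweight way to make those assumptions explicit is to encode your entry criteria as automated checks that run before the suite. The sketch below assumes Python; the check bodies are placeholders for whatever your environment actually exposes (health endpoints, data counts, version APIs, and so on):

```python
# Minimal sketch: entry criteria encoded as automated checks run before
# the test suite. The check bodies are placeholders - wire them to whatever
# your environment actually exposes.

def build_deployed() -> bool:
    """Is the agreed build of the system under test deployed?"""
    return True  # placeholder

def test_data_loaded() -> bool:
    """Is the baseline test data in place?"""
    return True  # placeholder

def interfaces_available() -> bool:
    """Are the interfacing systems (or stubs) reachable?"""
    return True  # placeholder

ENTRY_CRITERIA = {
    "build deployed": build_deployed,
    "test data loaded": test_data_loaded,
    "interfaces available": interfaces_available,
}

def unmet_criteria() -> list[str]:
    """Names of unmet criteria; an empty list means testing can start."""
    return [name for name, check in ENTRY_CRITERIA.items() if not check()]

unmet = unmet_criteria()
if unmet:
    # The planning assumptions are wrong: escalate and adjust the plan,
    # rather than starting a run that is blocked from the outset.
    print("Entry criteria not met:", ", ".join(unmet))
```

Failing checks are your evidence for the conversation that follows: the plan’s assumptions are broken, so the plan must be adjusted.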

Whether you are waiting for the system to be delivered or you have access to a partial system, you will run out of useful things to do quite quickly. Then you will have a difficult conversation with your product owner or project manager. If the deadline for completion doesn’t move, then you will be forced to do less testing. Some features may be tested less, or de-scoped from testing altogether.

The manager may believe you can make up time later, but this is hardly ever achievable in practice.

The testing time lost because of late or partial deliveries cannot be recovered through ‘working harder’.

Why is testing delayed? There are many possible causes, but they tend to follow one of these patterns:

  • Environments cannot be configured in time. People started late, were too busy, or were not qualified to create a meaningful test environment. What is available is partial, misconfigured or incomplete.
  • Delivery is delayed because the scale of work was underestimated.
  • Delivery is delayed because the software is buggy and difficult to test and fix.
  • Delivery is delayed because the development team lack the skills, experience or competence in the business domain or the technologies they are using.
  • Delivery is delayed because development started late.
  • Delivery is delayed because requirements keep changing.

Excluded from the list above are acts of God and other external factors beyond the control of the project team. If the project manager insists that the deadline for testing does not change, and the scope of testing is also fixed, you have a significant challenge. In every case above, the causes of late delivery argue for doing more testing, not less.

If the development work is underestimated, then the testing probably is too. If the software is buggy, testing will take longer. If developers lack skills, then the software will likely be poor and take longer to test. If developers started late (why?) and the scope is unchanged, why should testing be reduced? If requirements change it’s likely your plans are wrong anyway – working to a bad plan inevitably makes life harder.

How many of the common causes of delayed delivery suggest that less testing is required? None of them. Defend your plan.

Reporting Success And Failure

Reporting Success And Failure Infographic

Most testers know that effective testing requires curiosity, persistence, and a nose for problems. The pre-eminent motivation is to stimulate failures and create enough evidence for those failures to be traced to defects that can then be fixed.

Although finding (and fixing) defects is good for the quality of the product, communicating defects often feels like you are giving bad news to someone. This could be a developer who has made a mistake somewhere and has to fix it, or you could be reporting to stakeholders that some critical functionality does not work correctly and the system will be delayed.

No one likes to give bad news, and it’s natural to feel reluctant to upset other people, especially close colleagues. But whether the news is good or bad is not something the messenger should be concerned with.

Defects are always bad news for someone, but the role of testing is not to be judgemental in this way. In some respects, the tester is like a journalist, seeking out the truth. The Elephant’s Child story by Kipling includes the lines:

I keep six honest serving-men
    (They taught me all I knew);
Their names are What and Why and When
    And How and Where and Who.

In the same way a journalist tells a news story, you are telling the story of what you discovered as you tested a system.

The truth – the news – may be good or bad, but your responsibility is simply to seek out both the problems and successes as best you can. You are attempting to discover what a system does and how it does it. 

At the very end of a project the goal is to deliver to production with as few outstanding problems as possible. You want all your tests to pass, but the journey to success is hampered by test failures that need to be investigated and resolved. Your tactical goal is to find problems fast, but your ultimate goal is to have no problems to report. 


You need to behave much like an investigative journalist – looking for the story with a critical and independent mind. As Kipling wrote:

If you can meet with Triumph and Disaster, and treat those two impostors just the same …

Then you will be keeping your head, and doing a good job for your project and stakeholders.

Coverage Erosion

Whatever coverage target(s) exist at the start of testing, several factors conspire to reduce the coverage actually achieved. Erosion is an appropriate term to use as it truly reflects the inchmeal reduction of the scope of planned tests and the inevitable realization that not all of the planned tests can be executed in the time available.

Coverage erosion has several causes prior to test execution:

  • Test plans identify the risks to be addressed and the approach to be used to address them, but plans usually assume a budget for testing – and that budget is always a compromise.
  • Poor, unstable or unfinished system requirements, designs and specifications make test specification harder. Coverage of the system is compromised by lack of specification detail.
  • The late availability or inadequacy of test environments makes certain planned tests impractical or meaningless. Larger-scale integration testing may be impossible to execute as planned because not all interfaces or interfacing systems can be made available.
  • Performance testing might be compromised because environments lack scale or don’t reflect production.
  • Late delivery of the software under test means that, when deadlines remain fixed, the amount of testing in scope must be reduced.

Coverage erosion during test execution also has several causes:

  • If the quality of the software to be tested is poor on entry to a test stage, running tests can be particularly frustrating. The most basic tests might fail, and the faults found could be so fundamental that the developers need more time to fix them than anyone anticipated. If testing is suspended because the software quality is too poor, you’ll be running late. If the deadline doesn’t move, some tests will be de-scoped.
  • Where more faults occur than anticipated, the fix and re-test cycle itself will take more time and you’ll run out of time to complete all your tests.
  • When time does run out, and the decision to release is made, not all of your testing will be complete. Either some planned tests were never reached, or remaining faults block the completion of failed tests. Where the go-live date does not move, this is the classic squeeze on testing mentioned above.

Dealing with coverage erosion is one of the challenges testers face in all projects. Things rarely go smoothly, and reducing the time for testing (and the coverage achieved) is usually the only option to keep the project on track.

It is not wrong to reduce the amount of testing; it is only wrong to reduce the testing arbitrarily. Consequently, when choosing which tests to cut, you need to review the impact on your test objectives and the risks to be addressed. You might have some awkward conversations with stakeholders to get through.
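One way to keep cuts from being arbitrary is to score each candidate test by the risks it addresses and review cuts from the bottom of the ranking. A minimal sketch, assuming each test is tagged with the risks it covers and each risk carries an agreed weight (the names and weights here are invented for illustration):

```python
# Minimal sketch: rank candidate test cuts by the risk coverage they sacrifice.
# Risk weights and test-to-risk tags are invented; in practice they come from
# your risk assessment and test plan.

RISK_WEIGHTS = {"payment-failure": 10, "data-loss": 8, "ui-glitch": 2}

TEST_RISK_TAGS = {
    "end_to_end_checkout": ["payment-failure", "data-loss"],
    "export_report": ["data-loss"],
    "tooltip_rendering": ["ui-glitch"],
}

def risk_value(test_name: str) -> int:
    """Total weight of the risks a test addresses."""
    return sum(RISK_WEIGHTS[r] for r in TEST_RISK_TAGS[test_name])

# Review cuts from the bottom of this ranking, never from the top.
for name in sorted(TEST_RISK_TAGS, key=risk_value):
    print(f"{name}: risk value {risk_value(name)}")
```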

Where the impact is significant, you may need to facilitate a meeting of those who are asking for the cuts (typically project management) and those whose interests might be affected by them (the stakeholders). Your role is to set out the situation with regard to tests completed, the current known state of the tested system, the tests that fail and/or block progress, and the amount of testing that remains to be done.

Your plans and models articulate the original scope of testing and are critical to helping stakeholders and management understand the gaps and outstanding risks and make the decision to continue testing or to stop.

Incident Management


Once the project moves into the System Testing and Acceptance Testing stages, it is largely driven by the incidents occurring during test execution. Incidents trigger activities in the remainder of the project, and incident statistics can sometimes provide good insight into the status of the project. When categorizing incidents, we need to think ahead to how that information will later be used.

An incident is an unplanned event that occurs during testing that may have some bearing on the successful completion of testing, the acceptance decision, or the need to take some other action.

We use a neutral term for these unplanned events – incident. However, these events are often referred to using other terms; some more neutral than others. 

Tests that fail might be referred to as observations, anomalies or issues – neutral terms that don’t presume a cause. But sometimes problems, bugs, defects or faults are used – terms that presume the system is faulty. That might be a premature conclusion, and these labels can mislead.

We suggest you reserve the terms bug, defect, and fault for the outcome of diagnosis: a failure traced to a flaw in the construction of the system under test, which usually generates rework for the development team.

Incidents manifest themselves in two ways:

  1. Failure of the system: the system does not behave as expected in a test, i.e., it fails in some way or appears not to meet some requirement.
  2. Interruption to, or undermining of, testing: some event that affects the testers’ ability to complete their tasks, such as loss or failure of the test environment, test data, interfaces, or supporting integrated systems or services, or some other external influence.

Failure of the System

These incidents are often of the most direct concern because they undermine confidence in the system’s quality. 

Interruptions and Undermined Tests

Some organisations do not treat these incidents as incidents at all – interruptions are part of the rough-and-tumble of projects reaching their conclusion. Undermined tests – those where the environment or test setup is wrong – might be blamed on the test team (for the setup, or at least for not checking it before testing).

In both cases, progress through the testing is affected and, if you are managing the process, you are accountable for explaining delays. For this reason, you should either capture these events as incidents or ask the team to keep a testing log and record environment outages, configuration problems or lack of appropriate software versions to test. If you don’t, you’ll have difficulty justifying delays in progress and it could reflect badly on you and the team.
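Such a log need not be elaborate. A minimal sketch, assuming an append-only JSON-lines file is an acceptable record (the file name and fields are illustrative, not a prescribed format):

```python
# Minimal sketch: an append-only testing log for interruptions and outages.
# The file name and field names are illustrative, not a prescribed format.
import json
from datetime import datetime, timezone

LOG_FILE = "testing_log.jsonl"

def log_event(category: str, description: str, impact: str) -> None:
    """Append one timestamped event to the testing log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "category": category,      # e.g. "environment outage", "config problem"
        "description": description,
        "impact": impact,          # what testing was blocked or delayed
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_event(
    "environment outage",
    "Payments stub unavailable 09:10-11:45",
    "12 planned checkout tests blocked",
)
```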

To Log Incidents or Not?

With the advent of agile and continuous delivery approaches, the traditional view of incident management has been challenged. In staged projects, incidents are treated as potential work packages for developers, approved according to severity and/or urgency.

There is a formal, often bureaucratic process to follow whereby incidents are reviewed, prioritised and actioned by the development (or other) team. Sophisticated incident management tools may be involved.

In smaller, agile teams, the relationship between tester and developer is close. The team as a whole might meet daily to discuss larger incidents but, more often than not, bugs are detected, diagnosed, fixed and retested informally without any need to log an incident or involve others in the team or external business or IT personnel. More serious bugs might be discussed and absorbed into the work for a user story, or bundled into dedicated bug fix iterations or sprints.

We discussed the purpose and need for documentation in an earlier article. That discussion is appropriate for incidents too. The team needs to consider whether an incident tool and process is required and whether it is helpful to the team and/or required by people outside the team.

Larger teams tend to rely on process and tools to manage incidents for three reasons: 

  1. To ensure that incidents don’t get forgotten
  2. To ensure that serious problems are reviewed by stakeholders and project management
  3. To capture metrics that might be valuable during and after the project.

Separating Urgency from Severity

Whatever incident management process you adopt, we advocate assigning both a priority and a severity code to all of your incidents.

  • Priority is assigned from a testing viewpoint and influences when an incident will be resolved. The tester should decide whether an incident is of high or low priority (or whatever intermediate degrees are allowed). Priority indicates the urgency of the fault to the testers themselves and is based on the impact the test failure has on the rest of testing.
  • Severity is assigned from a user’s viewpoint and indicates the acceptability, or otherwise, of (usually) a defect. The end-users or their management should assign severity or, more efficiently, override the testers’ initial view as necessary. Severity reflects the impact the defect would have on the business if it were not fixed before delivery. Typically, a severe defect makes the system unacceptable. A low-severity defect might be deemed too trivial to need fixing before go-live and could be fixed in a later release.

If an incident stops testing and testing is on the critical path, then the whole project stops.

A high-priority incident stops all testing, and usually the project.

The important thing to bear in mind with incident classification schemes is that not every urgent incident is severe and that not every severe incident is urgent.
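One way to keep the two codes from collapsing into each other is to record them as independent fields on every incident. A minimal sketch in Python; the three-point scales and example incidents are illustrative, not a prescribed scheme:

```python
# Minimal sketch: priority (testing viewpoint) and severity (user viewpoint)
# recorded independently. The three-point scales are illustrative.
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):     # urgency to the testers: how badly it blocks testing
    HIGH = 1
    MEDIUM = 2
    LOW = 3

class Severity(Enum):     # impact on the business if shipped unfixed
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass
class Incident:
    summary: str
    priority: Priority    # assigned by the tester
    severity: Severity    # assigned (or overridden) by end-users/management

# Urgent but not severe: a trivial defect that blocks a large batch of tests.
blocked_run = Incident(
    "Login stub rejects all test accounts", Priority.HIGH, Severity.LOW
)

# Severe but not urgent: a serious defect that blocks no other tests.
rare_corruption = Incident(
    "Year-end rollover corrupts archived orders", Priority.LOW, Severity.HIGH
)
```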

Managing the End-Game


We call the final stages in our test process the ‘End-Game’ because the management of the test activities during these final, possibly frantic and stressful, days requires a different discipline from the earlier, seemingly much more relaxed period of test planning.

Remember, the purpose of testing is to deliver information to stakeholders to enable them to make a decision – to accept, to fix, to reject, to extend the project or abandon it entirely. 

If you have a shared understanding of the models to be used for testing it’s much easier to explain what ‘works’ (with respect to the model), and also where things fail to work and the risks of these failures. It is for the stakeholders to use this information to make their decisions – guided by you.

One of the values of adopting a risk-based test approach is that where testing is squeezed, late in a project, we use the residual risk to make the argument for continuing to test, or even adding more testing. 

Where management insists on squeezing the testing, testers should simply present the risks that are being ‘traded off’. This is much easier when an early risk assessment has been performed, used to steer the test activities, and monitored throughout the project. When management is continually aware of the residual risks, it is less likely that they will squeeze testing in the first place.

And that’s all folks, good luck with your testing!


By Paul Gerrard

Paul is an internationally renowned, award-winning software engineering consultant, author, and coach. He is the host of the Technology Leadership Forum and was Programme Chair of the EuroSTAR 2014 testing conference.