Editor’s Note: Welcome to the Leadership In Test series from software testing guru & consultant Paul Gerrard. The series is designed to help testers with a few years of experience—especially those on agile teams—excel in their test lead and management roles.

In the previous article, we explored service testing and its main components: performance testing, failover/soak testing, and manageability. As promised, here we’ll be exploring performance testing in a bit more detail.

Sign up to The QA Lead newsletter to get notified when new parts of the series go live. These posts are extracts from Paul’s Leadership In Test course which we highly recommend to get a deeper dive on this and other topics. If you do, use our exclusive coupon code QALEADOFFER to score $60 off the full course price!

Hello and welcome to the Leadership In Test series. In the last article, we looked at service testing for web applications. 

The purpose of this chapter is to give some advice and best practices for managing a critical component of service testing mentioned in that article, which is, drum roll please... performance testing!

We will cover:

  • Performance testing objectives
  • The four prerequisites for a performance test
  • The performance testing toolkit
  • The performance test process: incremental test development, test execution, and results analysis and reporting

Let’s go.

Performance Testing Objectives

As a quick recap, we can define the primary objective of performance testing as:

“To demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volumes on a production-sized database.”

Your performance test environment is a test bed that can also be used for other tests with broader objectives, which we can summarise as:

  • Assessing the system’s capacity for growth (If you're unsure which software can handle your needs, our list of best database management solutions can guide you.)
  • Identifying weak points in the architecture
  • Tuning the system
  • Detecting obscure bugs in software
  • Verifying resilience and reliability.

Your test strategy should define the requirements for a test infrastructure that enables all these objectives to be met.

Four Prerequisites For A Performance Test

"If any of these prerequisites are missing, be very careful before you proceed to execute tests and publish results. Utilizing quality assurance automation tools can help ensure that these prerequisites are met. The tests might be difficult or impossible to perform, or the credibility of any results that you publish may be seriously flawed and easy to undermine.

1. Quantitative, Relevant, Measurable, Realistic & Achievable Requirements

As a foundation for all tests, performance requirements (objectives) should be agreed prior to the test, so that you can determine whether the system meets those requirements.

To be useful as a baseline for comparing performance results, requirements for system throughput or response times should have the following attributes. They must be:

  • Expressed in quantifiable terms.
  • Relevant to the task a user wants to perform.
  • Measurable using a tool (or stopwatch) and at reasonable cost.
  • Realistic when compared with the durations of the user task.
  • Achievable at reasonable cost.

Often, performance requirements are vague or non-existent. Seek out any documented requirements if you can. If there are gaps, you may have to document them retrospectively. 

Before a performance test can be specified and designed, requirements need to be agreed for:

  • Transaction response times.
  • Load profiles (the number of users and transaction volumes to be simulated).
  • Database volumes (the numbers of records in database tables expected in production).

These requirements are often based on guesstimated forecasts of business volumes, so it may be necessary to get business users to think about performance requirements realistically.

You may also have to do some requirements analysis yourself and document these requirements as the target performance objectives.
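
If you do end up drafting the targets yourself, it helps to record them in a form that scripts and reports can reference directly. Here is a minimal sketch of that idea in Python; the transaction names and figures are hypothetical, purely to illustrate the structure, not recommended values.

```python
# Hypothetical target performance objectives captured as data, so that load
# scripts and reports can refer to one agreed source. All names and numbers
# below are illustrative only.
PERFORMANCE_OBJECTIVES = {
    "response_times_seconds": {
        # e.g. 95% of "search_order" transactions within 2s, none over 5s
        "search_order": {"p95": 2.0, "max": 5.0},
        "submit_order": {"p95": 3.0, "max": 8.0},
    },
    "load_profile": {
        "concurrent_virtual_users": 200,
        "transactions_per_hour": {"search_order": 6000, "submit_order": 1500},
    },
    "database_volumes_rows": {
        "orders": 5_000_000,
        "customers": 750_000,
    },
}
```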

2. A Stable System

If the system is buggy and unreliable, you won’t get far with a performance test. Performance tests stress all architectural components to some degree, but for performance testing to produce useful results, the system and the technical infrastructure have to be reasonably reliable and resilient to start with.

3. Realistic Test Environment

The test environment needs to be configured so the test is meaningful. You probably can’t replicate the target or production system, but the test environment should be comparable, in whole or in part, to the final production environment. You’ll need to agree with the system’s architect which compromises are acceptable and which are not, or at least what useful interpretation can be made of the test results.

Creating a realistic test environment is essential for meaningful performance tests. For tools that can help you simulate real-world conditions, check out our handpicked selection of software testing platforms.

4. Controlled Test Environment

Performance testers require stability, not only in terms of the reliability and resilience of hardware and software, but also in the minimization of changes to the environment and to the software under test. For example, test scripts designed to drive user interfaces are prone to fail immediately if the interface changes even slightly.

Any changes in the environment should be strictly controlled. If a release only fixes bugs that are unlikely to affect performance, consider not accepting it; as a rule, accept only changes intended to improve performance or reliability.

Performance Testing Toolkit

Your performance testing toolkit comprises five main tools:

  • Test data creation/maintenance – to create the large volumes of data on the database that will be required for the test. We’d expect this to be an SQL-based utility, or perhaps a PC-based product like Microsoft Access, connected to your test database.
  • Load generation – the common tools use test drivers that simulate virtual clients by sending HTTP messages to web servers (a minimal sketch of this idea follows this list).
  • Application running – this drives one or more instances of the application using the browser interface and records response time measurements. (This is usually the same tool used for load generation, but it doesn’t have to be.)
  • Resource monitoring – utilities that monitor and log client and server system resources, network traffic, database activity, etc.
  • Results analysis and reporting – test running and resource monitoring tools generate large volumes of results data for analysis.
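
To make the load generation and measurement idea concrete, here is a minimal sketch using only the Python standard library. It is not a substitute for purpose-built tools; the target URL, client count and request count are placeholders you would replace with your own.

```python
# Minimal sketch of a load-generation driver: a handful of "virtual clients"
# send HTTP requests concurrently and record response times. The URL and
# volumes below are illustrative placeholders.
import time
import threading
import urllib.request

TARGET_URL = "http://test-env.example.com/search"  # hypothetical test endpoint
VIRTUAL_CLIENTS = 10
REQUESTS_PER_CLIENT = 50

results = []            # (timestamp, response_time_seconds, http_status)
results_lock = threading.Lock()

def virtual_client():
    for _ in range(REQUESTS_PER_CLIENT):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
                status = response.status
        except Exception:
            status = None   # record failures too; they matter in the analysis
        elapsed = time.perf_counter() - start
        with results_lock:
            results.append((time.time(), elapsed, status))

threads = [threading.Thread(target=virtual_client) for _ in range(VIRTUAL_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Captured {len(results)} response time measurements")
```

Real load tools add ramp-up control, think times, transaction scripting and distributed load injection on top of this basic pattern.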

Related Read: THE 10 BEST SQL ANALYTICS SERVICES FOR QA TEAMS

The Performance Test Process

Below is a figure showing a generic process for performance testing and tuning. Tuning is not really part of the test process but it’s an inseparable part of the task of improving performance and reliability. Tuning may involve changes to the architectural infrastructure but should not affect the functionality of the system under test.

[Infographic: the performance test and tuning process]

Now we will look at how to develop, execute, analyse and report back on a performance test.

Incremental Test Development

Test development is usually performed incrementally in four stages:

  1. Each test script is prepared and tested in isolation to debug it.
  2. Scripts are integrated into the development version of the workload and the workload is executed to test that the new script is compatible.
  3. As the workload grows, the developing test framework is continually refined, debugged and made more reliable. Experience and familiarity with the tools also grow.
  4. When the last script is integrated into the workload, the test is executed as a “dry run” to ensure it is completely repeatable and reliable, and ready for the formal tests.

Interim tests can provide useful results

Runs of the partial workload and test transactions may expose performance problems. Tests of low volume loads can also provide an early indication of network traffic and potential bottlenecks when the test is scaled up. 

Poor response times can be caused by poor application design and can be investigated and cleared up by the developers earlier. Early tests can also be run for extended periods as soak tests.

Test Execution

Test execution requires some stage management or coordination. You should liaise with the supporting participants who will monitor the system as you run tests. The “test monitoring” team might be distributed, so you need to keep them in the loop if the test is to run smoothly and results are captured correctly.

Beyond the coordination of the various team members, performance test execution typically follows a standard routine.

  1. Prepare the database (restore from tape, if required).
  2. Prepare the test environment as required and verify its state.
  3. Start monitoring processes (network, clients and servers, database).
  4. Start the load simulation and observe system monitor(s).
  5. If a separate tool is used, when the load is stable, start the application test running tool and response time measurement.
  6. Monitor the test closely for the duration of the test.
  7. If the test running tools do not stop automatically, terminate the test when the test period ends.
  8. Stop monitoring tools and save results.
  9. Archive all captured results, and ensure all results data is backed up securely.
  10. Produce interim reports; confer with other team members concerning any anomalies.
  11. Prepare analyses and reports.
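
If you want each formal run to follow exactly the same routine, it is worth scripting the steps. The sketch below assumes hypothetical helper scripts (restore_test_db.sh, start_monitors.sh, run_load.sh, stop_monitors.sh); substitute whatever your own toolkit provides.

```python
# Hedged sketch of scripting the execution routine so every formal run is
# repeatable. The commands invoked are hypothetical placeholders.
import datetime
import shutil
import subprocess

RUN_ID = datetime.datetime.now().strftime("run_%Y%m%d_%H%M%S")

def run_step(description, command):
    """Run one step of the routine and fail loudly if it does not succeed."""
    print(f"[{RUN_ID}] {description}: {' '.join(command)}")
    subprocess.run(command, check=True)

# Steps 1-3: prepare the database and environment, start monitoring.
run_step("Restore test database", ["./restore_test_db.sh"])
run_step("Start resource monitors", ["./start_monitors.sh"])

# Steps 4-7: run the load simulation for the agreed test period.
run_step("Run load simulation", ["./run_load.sh", "--duration", "3600"])

# Steps 8-9: stop monitors, then archive and back up everything captured.
run_step("Stop resource monitors", ["./stop_monitors.sh"])
shutil.make_archive(f"results_{RUN_ID}", "zip", "results")
print(f"[{RUN_ID}] Results archived as results_{RUN_ID}.zip")
```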

Coordinating various team members during test execution can be challenging. Streamline this process by integrating advanced test management tools designed for Jira, which offer features like real-time collaboration and reporting.

Tuning usually follows testing when there are problems or where there are known optimisations possible. If a test is a repeat test, it is essential that any changes in the environment are recorded. This is so that any differences in system behaviour, and hence performance results, can be matched with the changes in configuration.

When it comes to managing test cases for performance testing, software for test management can be a game-changer. It allows for better organization, tracking, and even automation of test cases.

As a rule, it is wise to change only one thing at a time so that when differences in behaviour are detected, they can be traced back to the changes made.

Results Analysis and Reporting

The most typical report for a test run will summarise the response time measurements. For each measurement taken, the following will be reported:

  • The count of measurements.
  • Minimum response time.
  • Maximum response time.
  • Mean response time.
  • Nth (typically 95th) percentile response time.

The load generation tool from your toolkit should record the count of each transaction type for the period of the test. Dividing these counts by the duration of the test gives the transaction rate or throughput actually achieved. 

The count of transactions is the load applied to the system. This assumes that the proportions of transactions executed match the load profile you are trying to apply.

The load applied should match the simulated load profile, but it might not if the system responds slowly and transactions run at varying speeds.
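
To show how those summary figures and the achieved throughput can be derived from raw measurements, here is a minimal sketch using Python’s standard statistics module. The response times and test duration are invented for illustration; in practice they come from your load generation tool’s raw results.

```python
# Summarise one transaction type's response times and the throughput achieved.
import statistics

response_times = [0.42, 0.51, 0.48, 0.95, 0.63, 1.80, 0.55, 0.47, 0.71, 0.52]
test_duration_seconds = 600  # length of the measurement period

count = len(response_times)
summary = {
    "count": count,
    "min": min(response_times),
    "max": max(response_times),
    "mean": statistics.mean(response_times),
    # quantiles with n=20 gives 5% steps; index 18 is the 95th percentile
    "p95": statistics.quantiles(response_times, n=20)[18],
    "throughput_per_second": count / test_duration_seconds,
}

for metric, value in summary.items():
    if isinstance(value, float):
        print(f"{metric}: {value:.3f}")
    else:
        print(f"{metric}: {value}")
```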

Usually, you will execute a series of test runs at varying loads. Using the results of this series, plot each transaction’s response time against the load applied.
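
If your tools do not produce this graph for you, a few lines of Python will, assuming matplotlib is available. The load levels and response times below are invented purely to illustrate the shape of such a plot.

```python
# Sketch of the response-time-versus-load graph: each point would come from
# the summary of a separate test run at a different load level.
import matplotlib.pyplot as plt

loads = [50, 100, 200, 400, 800]                 # virtual users per run (illustrative)
p95_response_times = [0.6, 0.7, 0.9, 1.8, 4.5]   # seconds (illustrative)

plt.plot(loads, p95_response_times, marker="o")
plt.xlabel("Load applied (virtual users)")
plt.ylabel("95th percentile response time (s)")
plt.title("Response time vs load")
plt.grid(True)
plt.savefig("response_time_vs_load.png")
```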

Resource monitoring tools usually have statistical or graphical reporting facilities that plot resource usage over time. Reports that relate resource usage to the load applied are even more useful and can help identify bottlenecks in a system's architecture.

Best of luck!

Sign up to The QA Lead newsletter to get notified when new parts of the series go live. These posts are extracts from Paul’s Leadership In Test course which we highly recommend to get a deeper dive on this and other topics. If you do, use our exclusive coupon code QALEADOFFER to score $60 off the full course price!

Related Read: SERVER MONITORING METRICS TO TRACK FOR SYSTEM HEALTH AND PERFORMANCE

By Paul Gerrard

Paul is an internationally renowned, award-winning software engineering consultant, author, and coach. He hosts the Technology Leadership Forum and was the Programme Chair of the 2014 EuroSTAR Testing conference.