Performance testing saves companies millions of dollars. According to a report by Dun & Bradstreet, 59% of Fortune 500 companies experience a minimum of 1.6 hours of downtime each week. Let’s do the math real quick. On average, a Fortune 500 company employs 52,810 people. If each employee earned only $10 an hour, even a single hour of that weekly downtime would cost a company $528,100 a week in lost productivity, or $27,461,200 a year; at the full 1.6 hours, the figure is higher still.
That’s a lot of money that just…disappears.
It is in everyone’s best interest that software performance testing is done thoroughly.
In this article I’ll walk you through the importance of performance testing, the different types, common problems and useful tools. Sit back and read through the article or jump to any sections you’d like.
Performance testing checks that software performs well under its expected workload. Developers want to avoid shipping software that’s responsive and fast when only one user is connected but becomes sluggish when dealing with many.
QA testing isn’t only concerned with bugs. Software’s speed, responsiveness, and resource usage are important concerns too, and performance testing focuses on the bottlenecks that degrade them.
Bottlenecks are identified by simulating user traffic. Ideally, QA testers want to run performance tests under real-world circumstances. This is one of the software’s first voyages out of the safe haven of ideal circumstances and into the uncertain world of end user experience.
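As a rough illustration of how simulated user traffic works, the sketch below spawns concurrent “virtual users” with Python threads and records per-request latency. The request function and all the numbers are stand-ins; real tools like JMeter do this at far larger scale:

```python
import time
import threading
import random

def fake_request():
    """Stand-in for a real HTTP call; sleeps to mimic server latency."""
    time.sleep(random.uniform(0.01, 0.05))

def virtual_user(n_requests, latencies, lock):
    """One simulated user issuing a series of requests and recording latency."""
    for _ in range(n_requests):
        start = time.perf_counter()
        fake_request()
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)

latencies = []
lock = threading.Lock()
# 20 concurrent virtual users, 5 requests each
users = [threading.Thread(target=virtual_user, args=(5, latencies, lock))
         for _ in range(20)]
for u in users:
    u.start()
for u in users:
    u.join()

print(f"requests: {len(latencies)}, "
      f"avg latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")
```

Scaling the user count up and watching how average latency changes is, in miniature, what a load-testing tool automates for you.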
Why Do Performance Testing?
Performance Testing provides stakeholders with concrete information about the speed, stability and scalability of the software. Without performance testing, software runs the risk of suffering issues with speed and reliability on release.
We already mentioned the productivity cost of a system crash. In many cases that alone is reason enough for performance testing. But not all software is used internally; much of it is shipped and sold to customers. In that scenario, performance testing becomes even more important, because the last thing you want is a horde of customers upset with the quality of your product.
Why Do Performance Testing:
Reduce productivity costs by preventing system crashes.
Ensure that the software is fast, stable, and can handle multiple users.
Discover performance bottlenecks.
In the developmental stage, performance testing provides a clearer picture of what needs to be improved upon in terms of speed, stability, and resource usage. Without performance testing, software could release with several serious errors such as: running slowly with multiple users, crashing due to user overload, and inconsistent user experience across different operating systems and browsers.
Testing for bugs won’t give a clear picture of how the software will perform under load. It’s important that performance testing is carried out independently and with the sole purpose of finding bottlenecks. Good performance testers know that speed isn’t the only marker of performance. For example, an application that loads fast but uses 100% of the user’s CPU isn’t performant. Such an application would cause the end user many headaches, such as overheating, shortened CPU lifespan, slower performance across other applications, and occasional crashes.
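One simple way to catch a “fast but CPU-hungry” code path is to compare CPU time against wall-clock time. This is a minimal stdlib-only sketch; the workload function is an invented stand-in for whatever code you want to profile:

```python
import time

def busy_work():
    """Stand-in workload: a CPU-bound loop."""
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

wall_start = time.perf_counter()
cpu_start = time.process_time()
busy_work()
cpu_used = time.process_time() - cpu_start
wall_used = time.perf_counter() - wall_start

# A ratio near 1.0 means this code kept a core pegged the whole time,
# even if the wall-clock time looked acceptably "fast".
print(f"CPU/wall ratio: {cpu_used / wall_used:.2f}")
```

A page that loads in half a second but shows a ratio near 1.0 for that entire half second is exactly the kind of result a speed-only test would miss.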
How To Do Performance Testing
How you do performance testing varies depending on the software.
There are several different approaches a QA tester can take to performance testing. It’s a form of non-functional testing: rather than verifying what the software does, it examines how well the software does it.
Performance testers want to make sure the internal components are as fine tuned as possible. They’re the pit crew in an F1 race.
The type of performance testing that’ll be carried out depends on the methodology the organization is using. If they’re following the traditional waterfall approach to software development then the performance testers likely won’t get their hands on the product until development has finished. However, if the organization is using the agile methodology, then agile performance testing will likely happen throughout the development process.
What Are The Different Types of Performance Testing?
There are 5 main types of performance testing.
Some types require manual testing and others are automated. With the rapid growth of automation testing and more effective, reliable tools being developed each day, there is a significant preference for automation in these situations.
Automation testing is preferred because performance testing requires many virtual users to run the software as if they were actual end users. This is hard to replicate manually, as it would require far more testers than the team is likely to employ.
Just because much of performance testing is handled by machines doesn’t mean there aren’t important distinctions between types of tests. A QA tester should understand the different types of performance testing so they know the best tool for the job.
Here I’ll break down the different types of performance testing, what separates one from the other, and what each type of test hopes to accomplish.
Capacity Testing: Tests how many users the system can handle before performance dips below acceptable levels. By testing a software’s capacity it helps developers anticipate issues in terms of scalability and future user-base growth.
Load Testing: Confirms that the system can handle the required number of users and still operate at a high level of performance. This ensures that there are no day-to-day issues in performance.
Volume Testing: Checks that the software can handle and process a large amount of data at once without breaking, slowing down or losing any information.
Stress Testing: Intentionally tries to break the software by simulating a number of users that greatly exceeds expectations. The launch day of a new iPhone, and the sudden spike in user traffic on the Apple website is a good example of a stress test in the real world.
Soak Testing: Simulates high traffic for an extended period of time, checking the software’s ability to tolerate sustained load.
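In practice, these five test types mostly differ in how many virtual users you simulate and for how long. A hypothetical test plan might parameterize them like this (every number below is an invented example, not a recommendation):

```python
# Illustrative profiles only: user counts and durations are made-up figures.
EXPECTED_USERS = 1_000  # hypothetical expected concurrent users

test_profiles = {
    # name: (virtual users, duration in minutes)
    "load":     (EXPECTED_USERS,     30),      # expected traffic, normal run
    "stress":   (EXPECTED_USERS * 5, 30),      # far beyond expectations
    "soak":     (EXPECTED_USERS,     8 * 60),  # normal traffic for many hours
    "volume":   (EXPECTED_USERS,     30),      # same users, much larger payloads
    "capacity": ("ramp up until performance dips", None),  # find the ceiling
}

for name, (users, minutes) in test_profiles.items():
    print(f"{name:8} -> users: {users}, duration: {minutes} min")
```

Seeing the profiles side by side makes the distinctions concrete: stress testing multiplies users, soak testing multiplies duration, and capacity testing ramps until the system tells you where its limit is.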
4 Common Performance Problems
Performance testers will typically run into at least one of these four problems during testing.
Long Load Time
Poor Response Time
Poor Scalability
Bottlenecking
1. Long Load Time
Nobody enjoys staring at their screen for 30-60 seconds waiting for an application to load. It’s tedious, especially if it’s an app you open multiple times a day. Except for some seriously heavyweight software, most applications and web pages should be able to open in under a few seconds. Load tests typically catch any software that has trouble opening within an acceptable time frame.
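A minimal load-time check is just a timer around startup plus a budget. The two-second budget and the startup function below are arbitrary examples, not standards:

```python
import time

LOAD_TIME_BUDGET = 2.0  # seconds; arbitrary example threshold

def start_application():
    """Stand-in for real application startup work."""
    time.sleep(0.2)

start = time.perf_counter()
start_application()
load_time = time.perf_counter() - start

print(f"load time: {load_time:.2f} s "
      f"({'PASS' if load_time <= LOAD_TIME_BUDGET else 'FAIL'})")
```

Wiring a check like this into an automated suite turns “the app feels slow” into a measurable, repeatable pass/fail result.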
2. Poor Response Time
This is similar to the problem of long load times. It’s equally frustrating when, after finally opening the app, navigating between menus or inputting data also takes 30-60 seconds. Think about the apps you use every day: how many of them make you wait around for a page to load? Likely not very many. Slow response times make people lose interest.
3. Poor Scalability
This is casually referred to as the ‘slashdot effect’ or the ‘internet hug of death.’ Have you ever heard about an interesting website someone shared with you on Facebook, only for the page not to load when you clicked the link? That’s because you and a million other people all wanted to check out the cool little site, and it didn’t have the infrastructure to handle a sudden influx of users. A graph of its site traffic would show a sharp spike followed by a collapse as the server buckled.
4. Bottlenecking

Bottlenecks are caused when one component limits the performance of the whole system, often because processing power is poorly allocated. If your software requires the user’s CPU to run at 100%, there is no headroom left to run additional tasks. This is bad and often leads to overheating and a significant drop in performance. Bottlenecking can occur in several places; some of the most common are the CPU, memory, network bandwidth, and disk I/O.
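The queueing effect behind most bottlenecks can be shown with a few lines of arithmetic: once requests arrive faster than the slowest component can service them, the backlog, and therefore the wait time, grows every second. The rates below are invented purely for illustration:

```python
# Hypothetical rates: requests arriving vs. the slowest component's capacity.
arrival_rate = 120   # requests per second hitting the system
service_rate = 100   # requests per second the bottleneck can process

backlog = 0
for second in range(1, 6):
    backlog += arrival_rate - service_rate  # 20 extra requests queue up each second
    wait = backlog / service_rate           # time for a new request to clear the queue
    print(f"t={second}s backlog={backlog} wait={wait:.2f}s")
```

After just five seconds, a new request already waits a full second in the queue, which is why a component running only 20% over capacity can make the whole system feel broken.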
Which is the best performance testing tool?
Which performance testing tool is right for you depends on your project, your objectives, and your budget. Some tools, such as JMeter, are very good at running load and stress tests. Smaller companies may opt for one of many high-quality free open source performance testing tools in order to reduce costs.
Open Source Performance Testing Tools
One very popular open source performance testing tool is JMeter. It has been a go-to option for small companies looking to run effective tests. JMeter carefully analyzes server performance under load, allowing you to execute load and stress tests to check whether your software can handle the normal and maximum expected number of users.
When the performance tests are finished, JMeter allows you to view the results in a number of easy-to-read formats. One option is to display your results as a graph. Here’s what a JMeter graph looks like:
When reading the graph, the most important parameter is throughput (the green line), which shows the number of requests handled during the test; the higher, the better. It tells you how many requests your software can handle per minute. In this example, it’s 8,003 requests per minute.
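Throughput is simply requests completed divided by elapsed time. Using the figure from the graph above as a worked example:

```python
# Worked example using the throughput figure quoted above.
requests_completed = 8_003
elapsed_minutes = 1.0

throughput_per_minute = requests_completed / elapsed_minutes
throughput_per_second = throughput_per_minute / 60

print(f"{throughput_per_minute:.0f} req/min is about "
      f"{throughput_per_second:.1f} req/s")
```

Converting to requests per second is often useful, since capacity planning and server benchmarks are usually quoted in that unit.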
There are some drawbacks to open source performance testing. One is that all the simulated users are run on company servers. This means the tests are being done in ideal performance conditions, as opposed to real-world conditions. For small companies that don’t expect a substantial amount of load, this may suit their needs fine. As a company scales up, however, it may begin looking at purchasing a premium performance testing tool.
Here’s a brief list of some premium performance testing tools: