Performance Testing Basics: What is Throughput?


Throughput is one of the most misunderstood performance testing concepts, and one that new testers often struggle with. So what is throughput?

Basically, “throughput” is the number of transactions produced over time during a test. It’s also expressed as the amount of capacity that a website or application can handle. Before starting a performance test, it is also common to have a throughput goal: a specific number of requests per hour that the application needs to be able to handle.

Let's Imagine Throughput in the real world

For example: Let’s imagine that a gas station attendant fills up a car's gas tank using a gas pump. Let’s also say that it always takes the gas attendant just one minute to fill up any car, no matter how big it is or how low the car's gas tank is.

Let’s call this gas station “Joe’s Gas,” and envision that it only has three gas pumps. Naturally, if we have three gas pumps and three cars, it follows that Joe's attendants can only fill up three cars per minute. So, if we were to fill out a performance report for Joe's gas station, it would show that Joe’s throughput is three cars per minute.


This is Joe’s dilemma: no matter how many cars need gas, the maximum number that can be handled during a specific time frame will always be the same: three. This is our maximum throughput; it is a fixed upper bound.
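To make that ceiling concrete, here is a minimal sketch in Python using the made-up numbers from the analogy (three pumps, one minute per fill-up):

```python
# Illustrative numbers from the Joe's Gas analogy above.
PUMPS = 3            # Joe's Gas has exactly three pumps
FILL_TIME_MIN = 1.0  # every fill-up takes one minute, no matter the car

# Maximum throughput: the most cars that can be served per minute,
# no matter how many cars are waiting in line.
max_throughput = PUMPS / FILL_TIME_MIN
print(f"Max throughput: {max_throughput:.0f} cars per minute")  # -> 3
```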


As more vehicles enter the gas pump line, they are required to wait, thus creating a queue.

The same concept applies when we are testing a web application. If a web app receives 50 requests per second but can only handle 30 transactions per second, the other 20 requests end up waiting in a queue. When presenting performance test results, throughput is often expressed as transactions per second, or TPS.
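To picture how that queue builds, here is a quick sketch using the hypothetical 50-requests-in / 30-TPS-out numbers from above (illustrative numbers, not real test data):

```python
arrival_rate = 50   # requests arriving per second
service_rate = 30   # transactions per second (TPS) the app can actually handle

# Every second, 20 more requests arrive than the app can process,
# so the backlog (queue) keeps growing as long as demand exceeds TPS.
queue = 0
for second in range(1, 6):
    queue += max(0, arrival_rate - service_rate)
    print(f"After {second}s there are {queue} requests waiting in the queue")
```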

Real performance testing throughput results:

I use HP's LoadRunner (which comes with a throughput monitor) for performance testing, but other tools like JMeter have similar meters. In a typical test scenario, as users begin ramping up and making requests, the throughput increases as well.
Once all users are logged in and processing in a steady state, the throughput evens out, since the load each user generates stays relatively constant. If we wanted to find an environment’s throughput upper bound, we would continue increasing the number of users. Eventually, after a certain number of users are added, the throughput will level off, and may even drop. When throughput enters this state, it is usually due to some kind of bottleneck in the application.
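If you're curious how a throughput-over-time chart is derived from raw results, here is a rough sketch (not LoadRunner or JMeter output, just assumed transaction completion timestamps) that counts completed transactions per second:

```python
from collections import Counter

# Made-up completion timestamps (in seconds) that mimic a ramp-up
# followed by a steady state once all users are logged in.
completion_times = [
    0.4, 0.9,                          # 2 TPS while users are still ramping up
    1.1, 1.3, 1.6, 1.9,                # 4 TPS
    2.0, 2.2, 2.4, 2.5, 2.7, 2.9,      # 6 TPS -- steady state begins
    3.1, 3.2, 3.4, 3.6, 3.8, 3.9,      # 6 TPS -- throughput has evened out
]

# Bucket each transaction into the second it finished and count per bucket.
tps_over_time = Counter(int(t) for t in completion_times)
for second in sorted(tps_over_time):
    print(f"Second {second}: {tps_over_time[second]} TPS")
```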

A look at typical throughput results

Below are the LoadRunner throughput chart results for a 25-user test that I recently ran. Notice that once all 25 concurrent users are logged in and doing work, the throughput stays fairly consistent. This is expected.

Good Throughput Chart

Now notice what throughput looks like on a test that did not perform as well as the last example. All of the users log in and start working; once all users are logged in and making requests, you would expect the throughput to flatten out. Instead, we see it plummet. This is not good.

Bad Throughput Chart

As I mentioned earlier, throughput behavior like the example above usually points to a bottleneck. By overlaying the throughput chart with an HP Diagnostics ‘J2EE – Transaction Time Spent in Element’ chart, we can see that the bottleneck appears to be in the database layer:

Bad Throughput Chart with HP Diagnostics

In this particular test, requests were being processed by the web server, but on the back end, work was being queued up due to a database issue. As additional requests were sent, the back-end queue kept growing, and users’ response times increased. To learn more about HP Diagnostics, check out how I configured LoadRunner to capture these metrics in my video: HP Diagnostics – How to Install and Configure a Java Probe with LoadRunner

What is Throughput Recap

To recap: Throughput is a key concept for good performance testers to understand, and it is one of the top metrics used to measure how well an application is performing. I've also written posts on other concepts that a performance test engineer should know about.

Extra Performance Testing Awesomeness for your Ear Buds
For more detailed info on performance testing, make sure to grab a copy of Performance Analysis for Java(TM) Websites.

Also make sure to check out some interviews with some of the biggest names in performance testing to discover even more performance testing awesomeness.


  • Eric Proegler: Performance Testing in New Contexts
  • Michael Sage: Continuous Performance Testing with BlazeMeter
  • Scott Moore: Getting Started with Performance Testing and LoadRunner
  • Mark Tomlinson: A Bucket Full of Bottlenecks
