Throughput is one of the most misunderstood performance testing concepts, and one that new testers often struggle with. So what is throughput?
Basically, "throughput" is the number of transactions produced over time during a test.
It can also be expressed as the amount of capacity that a website or application can handle.
Before starting a performance test run, it is also common to have a throughput goal: a specific number of requests per hour that the application needs to be able to handle.
(FYI, I originally wrote this article in 2011, but the principles I cover are timeless.)
Let's Imagine Throughput in the Real World
For example: Let’s imagine that a gas station attendant fills up a car's gas tank using a gas pump.
Let’s also say that it always takes the gas attendant just one minute to fill up any car, no matter how big it is or how low the car's gas tank is.
Let’s call this gas station “Joe’s Gas,” and envision that it only has three gas pumps.
Naturally, if we have three gas pumps and three cars, it follows that Joe's attendants can fill up only three cars per minute.
So, if we were to fill out a performance report for Joe's gas station, it would show that Joe’s throughput is three cars per minute.
This is Joe's dilemma: no matter how many cars need gas, the maximum number that can be handled during a specific time frame will always be the same: three.
This is our maximum throughput; it is a fixed upper bound.
As more vehicles enter the gas pump line, they are required to wait, creating a queue.
The same concept applies when we are testing a web application. If a web app receives 50 requests per second but can only handle 30 transactions per second, the other 20 requests end up waiting in a queue.
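To make the queueing arithmetic concrete, here is a minimal sketch (in Python, which the original article doesn't use; the numbers are the hypothetical ones from the example above) showing how the backlog grows when requests arrive faster than the system can process them:

```python
# Hypothetical numbers from the example above: 50 requests/sec arrive,
# but the server can only complete 30 transactions/sec.
ARRIVAL_RATE = 50   # requests arriving per second
SERVICE_RATE = 30   # transactions the server can complete per second

def queue_length(seconds):
    """Return how many requests are waiting after the given number of
    seconds, assuming the queue starts empty and excess requests wait."""
    backlog = 0
    for _ in range(seconds):
        backlog += ARRIVAL_RATE                # new requests arrive
        backlog -= min(backlog, SERVICE_RATE)  # server drains what it can
    return backlog

print(queue_length(1))   # 20 requests waiting after 1 second
print(queue_length(10))  # 200 requests waiting after 10 seconds
```

Every second, 20 more requests join the queue than leave it, so the backlog (and with it, response time) grows without bound until the arrival rate drops.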
Throughput in performance testing is often expressed as transactions per second or TPS.
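Calculating average TPS from raw test results is simple division; here is a quick sketch (the function name and sample numbers are illustrative, not from any specific tool):

```python
def transactions_per_second(total_transactions, duration_seconds):
    """Average throughput: completed transactions divided by elapsed time."""
    return total_transactions / duration_seconds

# For example, 5,400 transactions completed over a 3-minute (180-second) run:
print(transactions_per_second(5400, 180))  # 30.0 TPS
```

Tools like LoadRunner and JMeter report this same ratio for you, typically over a sliding window so you can see how throughput changes during the test rather than just the overall average.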
Real Performance Testing Throughput Results
I use HP's LoadRunner (which comes with a throughput monitor) for performance testing.
But other tools, like JMeter, have similar monitors. In a typical test scenario, as users ramp up and begin making requests, throughput increases as well.
Once all users are logged in and processing in a steady state, the throughput evens out, since the user load stays relatively constant.
If we wanted to find an environment's throughput upper bound, we would continue increasing the number of users. Eventually, after a certain number of users are added, the throughput will level off, and may even drop.
When throughput plateaus or drops like this, it is usually due to some kind of bottleneck in the application.
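The ramp-up behavior described above can be sketched with a toy model: offered load grows with the number of users, but completed throughput is capped by the system's maximum capacity (all numbers here are hypothetical, chosen only to show the shape of the curve):

```python
MAX_CAPACITY_TPS = 100          # hypothetical system ceiling
REQUESTS_PER_USER_PER_SEC = 5   # hypothetical per-user request rate

def observed_throughput(concurrent_users):
    """Offered load grows linearly with users, but completed
    throughput cannot exceed the system's maximum capacity."""
    offered_load = concurrent_users * REQUESTS_PER_USER_PER_SEC
    return min(offered_load, MAX_CAPACITY_TPS)

for users in (5, 10, 20, 40, 80):
    print(users, observed_throughput(users))
# Throughput climbs with user count, then flattens at the 100 TPS ceiling.
```

In a real system, throughput past the ceiling often doesn't just flatten but falls, because the resource causing the bottleneck (database connections, thread pools, CPU) degrades under the extra queued work.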
A Look at Typical Throughput Results
Below are the LoadRunner throughput chart results for a 25-user test that I recently ran. Notice that once all 25 concurrent users are logged in and doing work, the throughput stays fairly consistent. This is expected.
Now notice what throughput looks like on a test that did not perform as well as the last example.
All of the users log in and start working; once everyone is logged in and making requests, you would expect the throughput to flatline.
But in fact, we see it plummet. This is not good.
As I mentioned earlier, throughput behavior like the example above usually has to do with a bottleneck.
By overlaying the throughput chart with an HP Diagnostics 'J2EE – Transaction Time Spent in Element' chart, we can see that the bottleneck appears to be in the database layer:
In this particular test, requests were being processed by the web server, but in the back end, work was being queued up due to a database issue.
As additional requests were being sent, the back-end queue kept growing, and users’ response times increased.
To learn more about HP Diagnostics, check out how I configured LoadRunner to capture these metrics in my video: HP Diagnostics – How to Install and Configure a Java Probe with LoadRunner.
What is Throughput Recap
To recap: Throughput is a key concept for good performance testers to understand, and is one of the top metrics used to measure how well an application is performing. I've also written some other posts on other concepts that a performance test engineer should know about:
- What is resource utilization?
- What are concurrent users?
- What is response time?
- Performance Testing Basics – Four Steps to Performance Nirvana
Extra Performance Testing Awesomeness for your Ear Buds
For more detailed info on performance testing, make sure to grab a copy of Performance Analysis for Java(TM) Websites.
Also, make sure to check out some interviews with some of the biggest names in performance testing to discover even more performance testing awesomeness on our TestGuild Performance & SRE Podcast.