Performance Testing

6 Performance Testing Mistakes Newbies Make (How to Avoid Slipping Up)

By Test Guild

Performance engineering can be a challenging field to get into. But it’s a very lucrative career, and people shouldn’t be too scared to get involved. If you’re a test engineer and are looking to learn performance testing, be assured that it’s a great career that’s going to open a lot of doors for you.

However, if you’re just getting started, there are a host of common issues that most newbies (and even some seasoned performance testing engineers) run into.

That’s why it was great to speak with a performance testing industry expert — Rebecca Clinard, founder of PerformanceWisdom — about the top mistakes most newbies make when performance testing.


Check More than the Status Code

When you’re performance testing and automating a transaction, one of the most common mistakes is trusting an HTTP status code of 200. You might think everything is OK at that point, but what happens if it was a login test and your script entered a bad password? Basically, you’d just be redirected back to the login page.

That login page was delivered with an HTTP status code of 200. However, the transaction was not successful.

To avoid this mistake, you should definitely place validations or assertions — which are basically string verifications on the response — to ensure those transactions are indeed successful.

Believe it or not, it happens more often than you would think. For example, Rebecca spoke about a performance test she heard about that was run against a huge e-commerce company.

Everybody was high-fiving when the performance test reached 40,000 concurrent users without any issues. As it turned out, however, none of the users were actually logged in! They had forgotten to make sure the users were successfully logged in before proceeding with the rest of the scripts and transactions.

Remember to always have your validations and assertions in the right place to avoid this rookie mistake.
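
As a rough illustration, here is a minimal sketch of that kind of assertion using Locust — just one possible tool, and the /login endpoint, demo credentials, and “Welcome” success marker are hypothetical placeholders. The same idea applies in JMeter, NeoLoad, or whatever load tool you use:

    from locust import HttpUser, task, between

    class LoginUser(HttpUser):
        wait_time = between(5, 15)  # realistic think time between actions

        @task
        def login(self):
            # catch_response lets the script decide success/failure itself instead of
            # trusting the HTTP 200 that a redirected login page also returns
            with self.client.post(
                "/login",
                data={"username": "demo", "password": "secret"},
                catch_response=True,
            ) as response:
                if "Welcome" not in response.text:
                    response.failure("Got a 200, but the user was never logged in")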

Think Time Matters

Another common performance testing mistake has to do with using improper pause and think times. People either forget to add them, or they use something unrealistically short, like two milliseconds. You really need to create a realistic performance test scenario that emulates how a real user is going to interact with your application.

You should look at the tool and say, “Okay — is it going to take five seconds to make a decision on this page? Is it going to take a minute to read this article?”

Skipping realistic think times produces skewed results that will in turn cause the team to panic for no reason, so avoid this mistake at all costs.
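
For example, here is a hedged sketch of a scripted user flow with randomized think times. The shop.example.com URLs and the delay ranges are invented for illustration; base real values on how long actual users dwell on each page:

    import random
    import time

    import requests

    BASE = "https://shop.example.com"  # hypothetical application under test
    session = requests.Session()

    session.get(f"{BASE}/catalog")
    time.sleep(random.uniform(5, 10))    # deciding which product to open

    session.get(f"{BASE}/product/42")
    time.sleep(random.uniform(20, 60))   # reading the product page or an article

    session.post(f"{BASE}/cart", data={"sku": "42", "qty": 1})
    time.sleep(random.uniform(2, 5))     # quick glance at the cart before checkout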

Don’t Cry Wolf

Crying wolf when you spot a bottleneck will get you into trouble fast, and often happens when you isolate a symptom instead of the root cause. This is one of the performance testing mistakes that drive me crazy.

This goes back to more methodical performance testing, where you create a test that ramps up more slowly so you can see the trends and throughputs and watch the servers become busier and busier. You can tell which server becomes saturated first, and rather than declaring, “This is now at 100% utilization,” or, “This is out of memory,” you should be aware that these could just be symptoms.
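
If your harness supports it, a stepped ramp-up makes the saturation order visible. Here is a minimal sketch using Locust’s LoadTestShape, assuming a Locust-based test; the step size (10 users every 20 seconds) and the 1,200-user ceiling are arbitrary examples:

    from locust import LoadTestShape

    class SlowRamp(LoadTestShape):
        step_users = 10      # add 10 virtual users...
        step_seconds = 20    # ...every 20 seconds
        max_users = 1200     # level off near the previously observed failure point

        def tick(self):
            run_time = self.get_run_time()
            steps_completed = int(run_time // self.step_seconds) + 1
            users = min(self.max_users, steps_completed * self.step_users)
            # return (target user count, spawn rate per second); Locust keeps
            # calling tick() until it returns None
            return users, self.step_users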

False Assumptions

Another error both Rebecca and I see time and time again is when customers make false assumptions while a test is running.

While a test is running, the tool is doing its job: it’s collecting the KPIs and the data. When it presents that data to you mid-run, however, it can look a little skewed.

In order to identify trends and really understand what’s going on, you should let the test run to completion. The one exception is a very high error rate — if errors spike, stop the test, then go back and do your analysis.
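
As a simple illustration of that rule of thumb, here is a hypothetical helper that only flags a run for early termination when the aggregate error rate is clearly out of bounds. The 10% threshold and the stats field names in the usage comment are assumptions, not something your tool necessarily exposes:

    def should_abort(total_requests: int, failed_requests: int,
                     max_error_rate: float = 0.10) -> bool:
        """Abort early only when errors dominate; otherwise let the test finish."""
        if total_requests == 0:
            return False
        return failed_requests / total_requests > max_error_rate

    # e.g. poll your tool's live stats periodically during the run:
    # if should_abort(stats.num_requests, stats.num_failures):
    #     stop_the_test_and_begin_analysis()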

Analysis is actually one of the most important aspects of a performance test — but it’s also one of the most difficult tasks.

Analysis is Important – Wait Until the Test is Done

Rebecca actually created a course to help you perform your analysis correctly — working smarter, not harder, and understanding what really happened during your test.

If you attempt to analyze a test and arrive at conclusions while it’s still running, more often than not you’ll only be wasting your valuable time.

Beware of Sinkholes

Back when I was doing mainly performance testing, I would sometimes go to the extreme; I would say to the team, “I’m not sharing any results until I get three runs and I’m able to analyze them.”

Most people don’t understand the performance engineering process; they want results immediately.

Rebecca agreed and mentioned that she does the same thing – runs a test at least three times before she reports the results, because she doesn’t want folks wasting time on anomalies that aren’t reproducible. More often than not, that’s where the sinkholes are.

Newbie performance engineers may find themselves spending hours and hours researching an issue when it was simply a fluke that happened once and wasn’t reproduced.

More Performance Testing Awesomeness

These are just a few of the most common performance testing mistakes most newbie testers make. For more check out my interview with Rebecca:

Joe: Hey Rebecca, welcome to TestTalks.

 

Rebecca: Thank you. Thank you for having me.

 

Joe: Before we get into it though, could you tell us something more about yourself?

 

Rebecca: Sure. I have been in the performance engineering industry for 16 years. I used to be a UNIX systems administrator. I moved into this career when a startup company basically called me up and wanted me to do performance engineering for their application. Since then, it’s been a great career. It’s very lucrative, challenging, and there’s really a lot of demand for performance engineers with a pretty low supply, so doors are always open.

 

Joe: Awesome. That’s a great point you just brought up. For some reason, it’s really hard to find engineers with the skills you need for performance testing. I’m just curious to know — I got into performance testing by happening to be doing test automation, happening to see a load testing tool, and just learning it. How did you get into performance testing, or why did you choose performance testing as a profession?

 

Rebecca: Yeah, so again, when a startup company called me up and wanted some performance testing done for their product, I … it looked brand-new to me. I was a UNIX system administrator, so I definitely understood the resource consumption on the infrastructure. I understood how web servers work, application servers, database servers. Getting up to speed on performance engineering, where you’re automating a realistic workload, was definitely new to me.

 

Once I made that jump and started to create performance test harnesses, I realized the value. In other words, people can load test a website and understand the scalability limitations without having to find out those limitations in production. You’re right, there aren’t enough people that actually do this in the field. I think the reason is there isn’t really any formal education out there on performance engineering. It’s all kind of trial and error, which is why I’m helping the industry in that way as well.

 

Joe: Awesome. I actually came across your name in the list of contributors to the performance engineering summit. What was that summit all about? Was it about how we can educate more people about performance or can you just tell us a little bit about the summit, what your takeaways were from it?

 

Rebecca: Yeah, exactly. It ended up being a working session on educating the next generation of performance testers. Like I just said, there’s a lack of education out there. What we did was whiteboard a bunch of different ways to come up with a curriculum really to bring … Whether you come from a QA background or a systems administration background or whatever your background, it really identifies the key skills that you’re going to need when you transition to performance engineering, and also teaches a methodical way of performance engineering.

 

Right now, most of the education out there is based on a tool, so by teaching the core methodologies and processes, you can apply that to any tool. That’s basically what we did, and we all had certain sections we contributed. We’re going to meet again, and the material will probably be published within a year.

 

Joe: Awesome. I think it’s a great idea, so I was really excited when I saw that. What would you say to someone that’s in QA and wants to expand their role a bit more and get into performance testing? I know this is … I’ve heard from some people that sometimes a QA person may not make a really good performance tester. It’s just a different set of skills. Any tips you can give to someone that wants to make that transition from, say, a QA automation engineer into performance?

 

Rebecca: Yes, definitely. As a QA person, the first thing you'll need is curiosity. You want to understand what’s really going on behind the scenes. Typical front-end functional QA testers are looking at websites and mobile applications. They’re clicking, testing for bugs, or testing user experiences. Well, performance engineering takes it deeper. The first thing to do is get really curious about those transactions. In other words, what are they hitting on the back end? Is it a database call? Is it logic being executed on the application server? What parts of the transaction responses are static, and which are dynamic?

 

To do that, the very first thing is to hook up a network sniffer, like Fiddler, or use Firefox’s Firebug or Chrome’s network inspector — it really doesn’t matter. You don’t look at the visual of the transaction anymore. You look at the POSTs, the GETs, the API calls, the traffic. That’s where you’re really going to understand the transactions. A lot of the tools out there come with recorders where you can record the transaction and play it back. That is not realistic, and it’s not all the information that you need. All the people that start seem to just record transactions and play them back. Sometimes they get errors, but it’s deeper than that. It’s understanding the end-to-end transaction flow behind the scenes of that transaction.

 

Joe: Once again, I think it’s another great point. A lot of times, people get a tool where they can put a load on the system, so they just throw a thousand users at it with no think time whatsoever. Of course they bring the system to its knees, and then people start panicking. How does someone learn how to create a realistic performance test scenario, where they look at transactions over time? Anything you recommend in that regard?

 

Rebecca: Once you have the transactions — a typical user workflow — and it’s playing back correctly, the next step is to make that scenario realistic. What I mean by realistic is you need to get those scripts to act with the behavior of real human beings. Like you just said, a lot of people [inaudible 00:06:11] a test and bring an application down to its knees with a very low number of users, because they’re not taking the next step and saying, “Okay, I’m looking at a transaction and I’m about to read an article or add something to a cart.” You need the beats, you need the pauses and the think times, like you just said, to create a realistic throughput.

 

If that application is already deployed to production, you can use the web logs to say, okay, how many items are added to a cart per minute, and you can reproduce that transaction flow in your load test scenario. Having the right throughput is what leads to accurate results, so you can understand the scalability and really know when you need to make configuration changes or add more to the infrastructure to support the load.
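
For a rough idea of what that looks like in practice, here is a small sketch that counts add-to-cart requests per minute from an Apache-style access log. The log format, file name, and “POST /cart” endpoint are hypothetical — adjust the pattern to whatever your production logs actually record:

    import re
    from collections import Counter

    pattern = re.compile(r'\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2})[^\]]*\] "POST /cart')
    per_minute = Counter()

    with open("access.log") as log:
        for line in log:
            match = pattern.search(line)
            if match:
                per_minute[match.group(1)] += 1   # bucket by dd/Mon/yyyy:HH:MM

    if per_minute:
        minute, rate = per_minute.most_common(1)[0]
        print(f"Peak add-to-cart rate: {rate} per minute (at {minute})")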

 

Joe: Great. I guess along the same lines — you’re doing a performance test, and a lot of times where I see people struggle is knowing what to monitor. They monitor so many things that they’re like, “I don’t know what any of these mean.” It seems almost like you need a lot of different people with different skills looking at this output. Are there any key performance indicators that you always look at when you’re doing a performance test?

 

Rebecca: On the front end, every tool is going to have key KPIs built in. The key KPIs are the throughput, the TPS, the response time, and the user load. With your tool, you’re going to gather those KPIs to say, okay, at this point throughput started to plateau, response time started to creep up, and what user load was that? You correlate all three KPIs to understand the scalability of the application.

 

Now, when you’re monitoring the back end, you want to hook up every single server in the deployment. Maybe you have a cluster of application servers, and you have web servers which are load balanced. Let’s say you only have one database server. The key KPIs on every single VM or machine are, of course, CPU usage, memory usage, I/O, and disk usage, and then you want to hook up the key KPIs for the technology — the software servers that are running on those machines.

 

For a web server, a good KPI is active requests. A really good KPI is anything that’s queued — queued requests. I always like to look at a queued KPI because that tells you you’ve reached a throughput it could not handle anymore, and requests are waiting in line. For application servers, if it’s a Java app server, you always want to look at the memory of the Java heap.

 

In other words, is it all being used? Is it constantly garbage collecting? If you’re using any kind of messaging system, you want to look at queued messages, active messages in, messages out. For database servers, longest-running queries, and again any queued queries. Basically, you’re looking for free resources and understanding whether anything is becoming a bottleneck in the infrastructure. The key is, when you’re doing a load test and you ramp up the number of users, you want to understand what becomes saturated first, because if you identify a saturation point where something gets queued in a server but you’re not looking at the entire infrastructure end to end, that could be a symptom and not a root cause. Having timelines and graphs to overlay onto your performance test results makes it much easier to analyze and identify the root causes.
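
Summarized as a quick checklist — not any tool’s actual configuration, just the KPIs mentioned above grouped by tier, in a form you could adapt to your own monitoring setup:

    # Per-tier KPI checklist distilled from the discussion above; names are
    # descriptive labels, not real metric identifiers in any specific tool.
    BACKEND_KPIS = {
        "every_vm_or_machine": ["cpu_usage", "memory_usage", "io", "disk_usage"],
        "web_server":          ["active_requests", "queued_requests"],
        "java_app_server":     ["heap_usage", "garbage_collection_time"],
        "messaging_system":    ["queued_messages", "messages_in", "messages_out"],
        "database_server":     ["longest_running_queries", "queued_queries"],
    }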

 

Joe: How do you know what is a symptom and what’s not a symptom — is it just making sure you have all the information you need? A lot of times, once again, a lot of people jump to conclusions based on a limited amount of information. Do you recommend people run a test multiple times and then take the average to find out what’s going on, or … what’s your approach to that?

 

Rebecca: Right. A methodical approach: say you ran a test and you see there are multiple saturations in your infrastructure. Response time is off the scales, unacceptable, and throughput has plateaued. Honestly, what I do at that point is take it back. I say, “Okay, if things went bad at 1,200 users or even 12,000 users, I’ll run another test, but as you ramp up to that number of users, you go a lot more slowly.”

 

This helps you identify the root cause. If you ramp up 10 users every 20 seconds, you’re ramping up methodically and more slowly. You’re going to see what became saturated first in the timeline, in the graphs. Ramping up quickly definitely gives you information, because it tells you when things fell apart, but if you are going to diagnose a bottleneck, you need to ramp up a lot more slowly to understand what happened first. What happened first is almost always the root cause.

 

Joe: Awesome, yeah, great advice. It’s been a while since I’ve done performance testing. I actually started off earlier in my career doing mainly performance testing. I’m curious to know how it's changed. Back in the day, it was more of a waterfall approach where, once again, the application was already built — now go performance test it when it’s too late. I used to try to encourage testing as early as possible, to try out different configurations before decisions were actually made on architecture. Nowadays, what are you seeing? How does performance play into the agile teams that we’ve been seeing more and more of?

 

Rebecca: Yeah, I mean, I’ve read a lot too about continuous integration and performance testing early. I'm more in the professional services area. What I do is, when I get approached to performance test an application, I like to test on an environment that closely mimics production in both configuration and hardware. To be honest, where I am in this industry, I don’t get involved in the beginning.

 

When I’ve been in an Agile group working full-time for companies, what I’ll do is identify performance issues in production, write stories for those, and put them into the flow for the Agile team. Continuous integration — I understand using Jenkins and performance testing early, but here’s my take on that: I think you get the most value out of performance testing when you have realistic users actively going through transaction flows all at the same time, because if you have two business transactions that are sharing a resource, you’re going to see the competition for resources.

 

If you performance test a single transaction — let’s say the code was just developed by a developer and deployed on a QA environment that’s an eighth of the hardware that’s in production — you’re going to get kind of a silo effect. You’re going to be able to say, okay, this is the resource usage. You definitely can get some great data from it, but the value that I see is when you’re performance testing everything together, in a production-like environment. That is where you’ll get the scalability numbers and the accurate numbers.

 

Joe: Once again, I definitely agree. That was one of my pet peeves: we’d spend all this time doing performance testing in what we thought was an environment just like production, and it always turned out, once we got into production, it wasn’t like production. We ended up just running off-hours performance tests that weren’t really destructive to get that type of information. I don’t know if that’s the same type of struggle people still have with performance testing. That to me seemed like one of the biggest pain points with performance testing: actually getting a realistic scenario and then running it against what is a realistic environment.

 

Rebecca: Yeah, exactly. I mean, if you’re going to spend your time running a test, you want to get some value out of it and not just check off a box — okay, I ran this new transaction on this environment and this is the response time. Is that really relevant information? I don’t know. Sometimes it is, sometimes it isn't.

 

Joe: Great. Also, back in the day, I used to run a tool called WinRunner within the LoadRunner controller to get what I call the experience of a real user. I was actually just measuring refresh rates on the screen. A lot of times, back in the day with performance testing, you’d just send a request, get a response back, and that ended it — but there could be a lot of processing still going on in the user’s computer. Once again, it's determined by the user’s configuration and things like that.

 

Do you still see those types of things? Do you have any workarounds to get there? Where I always get confused now with performance testing is I always think of client/server; I don’t necessarily think of the front end. But a lot of times now, with these JavaScript front ends, there is a performance cost sometimes. Any ideas or tips on how to actually capture a client-side performance number?

 

Rebecca: Yeah. About two years ago, I ran across an application that was really heavy on front-end JavaScript, and that’s where the time was being spent — in the front-end JavaScript. To be honest, a lot of the tools out there are excellent. They create real user sessions, unique user sessions. They automate the throughput. But as far as capturing response times in the browser, for code, like you said, executing in the browser itself — or, let’s say it's a mobile application, the code executing on a mobile device …

 

What we’re doing now is called a hybrid test, where 90% of the load is driven via the network, which is a realistic workload, and then 10% can be a user load. It can be done in a number of ways — a real user load, or I should say a [browser real 00:16:18] user. Selenium is a good solution for that. I used TruClient. The company I was working with had the funds and they were able to purchase that protocol. It’s pretty slick and quick to set up.

 

Again, it’s very expensive though. Then, a company that I’m working with now chooses to have the real user experiences be real users with Fiddler running on their machines. We run thousands of users during the automated load tests via the network protocol, and then they just have a couple of users clicking around at the same time and capturing response times. I’m really looking forward to a new tool out there that’s going to kind of integrate the two even more seamlessly, but without such a high price tag.

 

Joe: I know that you also have a website dedicated to performance testing.

 

Rebecca: The website is performance engineering wisdom. Basically, I started it to help educate. I wanted to share my 16 years of experiences, mistakes and all, so that other people can learn performance engineering more quickly and not spend so much time figuring it out. It’s a tool agnostic website too, so you can apply those core concepts to any tool. Also, I created a blog on very specific issues and challenges of performance testing and how to solve them.

 

Joe: What I really got excited by is I saw a course that you have on the data science behind analyzing performance test results, which is tool agnostic — something I would have died for when I first started. What's this course all about? How could someone benefit from it if they want to learn more?

 

Rebecca: Yeah, that course was my heart and soul. It took me probably 300 hours to create the content. It’s about 40 to 50 minutes — a video that you just sit and watch. The course goes over analyzing performance test results. First, it defines performance engineering versus performance reporting. I made that definition because when you’re in professional services, or you work for a company and you performance test and you isolate, okay, this system or this application can support 23,000 concurrent users — that’s called reporting.

 

Performance engineering is, well, why — why does it only … or what happens … What do you need to change in the environment to support a higher number of users? What I do in the course is go into performance engineering and how to set up a performance test to answer those questions. A lot of gotchas, stuff that you can get tripped up on, methodologies where you reproduce the results, how to analyze tons of data to understand scalability, and what to look for. I call it clearing the clutter, because you get overwhelmed when you run a performance test. There’s so much data to look at afterwards. I say, okay, these are really the only things you need to look at.

 

Also in the course, I show you how to create targeted workloads to answer your questions. If you have a specific question — say, why response time creeps up — you can design tests that clearly answer it. Another thing about analyzing performance test results is there are certain ways you can analyze them that make it a lot easier. You clear the clutter — I say you reduce the noise — you work smarter, not harder. You identify the key KPIs that are going to lead you to answering the questions that you need answered. I’ve taken a lot of notes over the last few years, and I just put them all together and created this course to help people with the learning curve, really cut to the chase, and get to the gold mines of understanding.

 

One thing I do cover in the course, which is interesting, is why tools are great and you need them to run performance tests, but analyzing the results is a human process. No tool is going to automate that for you. They can give you … here are some correlations, here’s some information, but it really takes a human to analyze the results and make sense of it all. Yeah, again, that course is really my heart and soul of performance engineering.

 

Joe: It sounds awesome. Hopefully it's selling like hotcakes, because I think one way we can help people who want to get into performance testing is to point them toward a course like this, from someone with 16 years of experience, on what I think is one of the most difficult pieces of performance testing: actually analyzing the results. It sounds like a great resource.

 

Rebecca: Yeah, I totally agree. Analyzing results is where the skill set is totally needed in this industry — setting up tests, executing them, but especially analyzing. I manage a team of performance engineers right now for another company. It’s the analysis skills that provide the most value.

 

Joe: Absolutely. It’s so critical, because your business might be making decisions on this. I think you mentioned something about capacity planning: what happens if we have to … it becomes the holidays and we need to cover 20% more traffic? They are actually making money-based decisions on this information, so being able to look at all this data and give them a valuable analysis of what’s going on, so they can make a valid decision about what to do with it, I think is critical.

 

Rebecca: I totally agree. Actionable results is what you want to get from a performance testing engagement.

 

Joe: Do you have any tips on how to make … You’ve analyzed the results, and now you have a report that you’re going to present to an audience that is not very techy but that is actually going to start making business decisions based on it. Any tips on how to present data so that it's more consumable or more user-friendly?

 

Rebecca: Yeah, definitely. You want to back up your results, and you’re going to have the graphs in there from the tools and all of that, but you definitely want to have an executive summary that everybody can understand. For example, under the current configuration and hardware, clearly state what the scalability of your application is — where the ceiling is.

 

Perhaps you're expecting headroom of 25% more traffic over the next year. Are you going to be able to handle it? Answer those questions and have the data to back them up. Also, if you’re monitoring the environment, the actionable information that you’re delivering is what is limiting the current scalability.

 

Let’s say an application’s goal was to reach 10,000 concurrent users, with headroom up to, say, 15,000 concurrent users for the future, and it blew that away. Everything was absolutely fine. However, you still want to be able to say, okay, heads up on this tier of your deployment, because from our testing and from the trending, it looks like this will become saturated first as you increase the load beyond the initial target workload. Another one is: write it in English. Don’t just have the graph — give the information.

 

Joe: Rebecca, you’ve been doing performance testing for 16 years, is there anything you see over and over again in your experience that most people doing performance testing are doing wrong?

 

Rebecca: I think what’s been interesting in the last couple of years — what’s been mostly wrong, well, I shouldn’t say mostly wrong, but a lot of the time — is that clients don’t understand. They know they need their web application or their mobile application performance tested, but they don’t really know what they need tested. A lot of our responsibility is kind of educating them about performance testing and saying, okay, we can performance test transactions which are automatable and can be tested over and over again.

 

For example, if they have an application that’s very heavy on data — relying on data, very sensitive data — and you can only execute that transaction once using a piece of data, you can see where a lot of the time will be consumed with generating and managing the data. So it’s educating prospects and clients to say, okay, following a typical user transaction flow, these are the transactions that we’re going to automate here. Another thing is they might come to us and say, okay, we need our website to support 100,000 concurrent users, but we also need to understand throughputs.

 

In other words, in a minute or in an hour, how many of these — or whatever the application handles — are going to be uploaded? It’s not … The requirements aren’t just based on the concurrent users that they expect in production, but also the throughput rates of their business transactions. Lately, a lot of the things that have needed more defining come down to educating the client. Now, for performance engineers — I wrote this blog, it’s called … It’s on my website, it’s called the top 30, I think I said 30, rookie mistakes for performance engineers. It got a lot of traffic. It’s basically listing everything that rookies do wrong — or not even rookies. Sometimes we all still make mistakes, obviously.

 

Joe: Cool. I know we can’t go through all 30 of them, but can you give us a few of the performance testing rookie mistakes that you see most folks making?

 

Rebecca: When you’re performance testing and you’re automating a transaction, a rookie mistake is that a status code comes back — an HTTP status code of 200 — and you say, “Okay, well, that means it was successful.” But let’s just take, for example, that it was a login and the login had the wrong password, so basically it redirected you back to the login page. That login page was delivered. It was an HTTP status code of 200. However, that transaction was not successful. So definitely place validations or assertions, which are basically string verifications on the response, to make sure those transactions are indeed successful. That happens.

 

There was a performance test I heard about that was run against a huge e-commerce company. Everybody was like, high-fiving — it reached 40,000 concurrent users, excellent test — and none of the users had actually logged in, because they forgot to make sure that users were successfully logged in before they proceeded with the rest of the scripts, the rest of the transactions.

 

The next one — you actually already mentioned it too — is pause and think times. People forget to add them, or they use 2 milliseconds when they think it’s two seconds, because every tool is set up differently. You really have to look at the tool and say, okay, is it going to take five seconds to make a decision on this page? Is it going to take a minute to read this article? So: unrealistic loads because you forgot to add think and pause times. That’s a rookie mistake that a lot of people make.

 

Another rookie mistake is crying wolf when you spot a bottleneck. When I say that, I mean that you isolate a symptom and not the root cause. This goes back to more methodical performance testing, where you create a test that ramps up more slowly so you can see the trends, you can see the throughputs, you can see all the servers becoming busier and busier, and which server becomes saturated first. Rather than saying, “Oh look, this is at 100% use,” or “this is out of memory” — no, that could be just a symptom.

 

Another one I see time and time again — and the customers actually drive this one; this is where, again, we have to educate customers — is don’t make false assumptions while the test is running. While the test is running, the tool is doing its job. It’s collecting the KPIs and the data. When it’s presenting it to you, it can look kind of skewed. In order to identify trends and really understand what’s going on, let the test run to completion — or, when you get a very high error rate, stop it, then go back and do your analysis.

 

That’s what my course talks about too: doing the analysis the right way, working smarter not harder, and understanding what really happened. If you try to analyze a test and make conclusions while it’s running, more often than not you’re really wasting your valuable time. Another rookie mistake would be when you create a script where a user logs in, does certain actions, and then logs out. In a real-case scenario, a user will log in.

 

Let’s take an easy one — say they’re adding to a shopping cart. They’re going to add a bunch of stuff to the shopping cart, they’re going to check out, and they’re going to log out. Now, a lot of times what people do is create a script that logs in, adds something to the cart, logs out. They don’t really iterate on the body of that script.

 

Therefore, you have an unrealistic login and logout rate and not much throughput on the business transaction side. When you’re creating these performance test scripts that [inaudible 00:30:48] real users, make sure you put a loop in. Most commercial tools — I like NeoLoad for this, and for a lot of other reasons too — will basically have … they’ll label it [inaudible 00:30:59], which is log in, and end, which is log out. Then they will repeat the actions in the body of it without you even having to drag in a loop. If you’re running an open source tool and you have to drag in a loop, make sure you do that yourself to create a realistic throughput and workload on that application.
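
To make that concrete, here is a hedged sketch of the same pattern in an open source tool (Locust). The endpoints, credentials, and iteration counts are invented for illustration — the point is simply that login and logout happen once per virtual user while the cart transactions repeat in between:

    import random

    from locust import HttpUser, task, between

    class Shopper(HttpUser):
        wait_time = between(5, 30)

        def on_start(self):
            # one login per virtual user session, not one per transaction
            self.client.post("/login", data={"username": "demo", "password": "secret"})

        @task
        def shop(self):
            # the body of the script repeats, so cart activity dominates the
            # throughput instead of an unrealistic login/logout churn
            for _ in range(random.randint(3, 8)):
                self.client.post("/cart", data={"sku": str(random.randint(1, 500)), "qty": 1})
            self.client.post("/checkout")

        def on_stop(self):
            self.client.get("/logout")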

 

Joe: Awesome. One that really struck home for me is when you said not to cry wolf. A lot of times I’d go to the extreme, where I would say I’m not sharing any results until I get three runs and I’m able to analyze them, because for some reason I’d have overeager managers always looking over the numbers. I’m like, hold up — let’s make sure what we’re seeing is really happening before jumping the gun.

 

Rebecca: It’s so true. I mean, people don’t understand the performance engineering process; they want results immediately — clients, managers, and everyone else. If I run a test, I do the same thing: I do it at least three times before I report results, because we don’t want anybody wasting time on anomalies that aren’t reproducible. A lot of times, that’s where the sinkhole is. They find out they’ve spent hours and hours researching an issue when it was a fluke — it happened once and it wasn’t reproduced. Infrastructures are like that.

 

Joe: Thank you, Rebecca. Before we go, is there one piece of actionable advice you can give someone to improve their performance testing efforts? And let us know the best way to find or contact you.

 

Rebecca: Okay, so the one piece of advice I have relates back to what we talked about: do not be in a rush. Don’t be in a rush to slap together a performance test and get results. Spend your time on those scripts, making sure that you understand what’s going on — that you understand the business transaction flows and the infrastructure — and then start off with a low, low load and increase it methodically. Take your time. Don’t be in a rush, because you’re going to get the most valuable information if you do it methodically and with patience.

 

To get in touch with me, you can always email me from my website, performanceengineeringwisdom.com, or you can email me at rebecca.kleiner@gmail.com. I just want to say that performance engineering is a challenging, very lucrative career, and people shouldn’t be scared to get involved. We need more people in the industry. We’ve got people like me who are willing to teach you. It’s a great career that’s going to open a lot of doors for you.

 

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}

6 Must Run Performance Tests for Black Friday

Posted on 11/21/2024

Regarding e-commerce, Black Friday is the ultimate test of endurance. It's one of ...

Unlock the Power of Continuous Performance Engineering

Posted on 03/25/2024

With the current demand for software teams to high quality product in a ...

16 Top Load Testing Software Tools for 2024 (Open Source Guide)

Posted on 03/12/2024

In the world of software development, testing is vital. No matter how well ...