How to Automate Your Performance Test Results with Joey Hendricks

By Test Guild

About this Episode:

Continuous performance testing is nothing new, but one of the biggest pitfalls of a reliable automated performance test is the manual analysis of its results. In this episode, performance testing expert Joey Hendricks will share his approach to automating much of the analysis of your runs. Discover how to embed an automated analysis approach in your testing process to perform complex comparison analysis in an automated fashion reliably. Listen up!

TestGuild Performance Exclusive Sponsor

SmartBear is dedicated to helping you release great software, faster, so they made two great tools. Automate your UI performance testing with LoadNinja and ensure your API performance with LoadUI Pro. Try them both today.

About Joey Hendricks


Fueled by a staggering amount of caffeine and dumplings, Joey has for the past three years embarked on the noble quest to protect software applications from pesky performance problems.

Connect with Joey Hendricks

 

Full Transcript Joey Hendricks

[00:00:01] Welcome to the Test Guild Performance and Site Reliability Podcast, where we all get together to learn more about performance testing with your host, Joe Colantonio.

[00:00:16] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Performance and Site Reliability Podcast. Today we'll be talking with Joey all about how to automate your performance test results analysis. Joey is fueled by a staggering amount of caffeine and dumplings, and he has over three years of experience helping teams on the quest to protect software applications from those pesky performance problems. I know you've all had that experience yourselves, and you don't want to miss this episode, because you'll discover a better way to take all the raw results of your performance testing efforts and really make sense out of them. This episode is really going to help you. We're going to take some talking points from his session at this year's PAC — I think it's Tricentis now, it used to be Neotys — and we're going to dive in a little bit more on how to help you actually analyze your results better. You don't want to miss this episode. Check it out.

[00:01:05] This episode is brought to you by SmartBear. Listen, load testing is tough. Investing in the right tools to automate tests, identify bottlenecks, and resolve issues quickly could save your organization time and money. SmartBear offers a suite of performance tools like LoadNinja, a SaaS UI load testing tool, and LoadUI Pro, an API load testing tool, to help teams get full visibility into their UI and API performance so you can release and recover faster than ever. Give it a shot. It's free and easy to try. Head on over to smartbear.com/solutions/performance-testing to learn more.

[00:01:47] Joe Colantonio Hey, Joey. Welcome back to the Guild.

[00:01:54] Joey Hendricks Thank you, Joe. Glad to be here. I do, I do love dumplings and coffee, and a bit more highly caffeinated cola. So.

[00:02:00] Joe Colantonio So what kind of dumplings? Chicken dumplings? What is it?

[00:02:06] Joey Hendricks I love chicken dumplings. Whenever you're in the neighborhood, hit me up — there are very good dumpling restaurants around here, so.

[00:02:13] Joe Colantonio I am a foodie, so I'll definitely take you up on that for sure. That's awesome. So, Joey, it's been a while since we spoke. I think the last time was episode 56, and it was about your open-source project, QuickPotato. Anything new going on with that project?

[00:02:28] Joey Hendricks It's actually going great. It's the performance testing library for Python that we talked about. It's grown in popularity a bit, I've gotten some good feedback on it, and it's become technically a bit better, which I like. So it's growing at a steady pace, and when I have time I add to it and continue working on it. It's a great little repository — if you ever want a profiling tool for your Python project, don't forget to hit up QuickPotato and find it on my GitHub.

[00:02:57] Joe Colantonio All right, so at this year's PAC event, I think you spoke about how to analyze your performance test results almost automatically. There are some really cool things going on there. Just curious to know what made you want to submit that topic?

[00:03:13] Joey Hendricks To be honest, at APG we've been doing this for a very long time already. We religiously store all of our test results. We do that because we want to see trend lines and we want to do analysis on our data. Having a wealth of information available to you, to analyze across multiple releases and look back and see what changed, is amazing. And the thing that really triggered me to start thinking about how I could automate my result analysis was mainly my robotic vacuum cleaner. My old vacuum cleaner broke and I bought this new one on the recommendation of a friend, who said these automated vacuum cleaners are great. I was very skeptical, because if you work in IT you know it's never going to work the way you want — but it works absolutely great. I love the thing. It cleans my apartment, it keeps everything nice and tidy, and it only notifies me when I want to be notified. It's the same with my testing: I only want to be triggered when it's relevant. If there is no change and there's nothing wrong with the application, why should I look? Just let me go and do my other stuff — I have plenty of other stuff to do — but trigger me when it matters. So that's where this topic came from. It came from a robotic vacuum cleaner, which I bought because my vacuum cleaner broke. It was a stroke of, oh, let's go.

[00:04:34] Joe Colantonio That's awesome. I always say this on the show: when I was doing mainly performance testing, really like 20-odd years ago, I always found creating the scripts was a lot easier than actually analyzing the data. So how do you get from that to actually using some methods that help you automatically analyze this, so you only look at it, like you said, when you need to?

[00:04:57] Joey Hendricks Well, first, I think most people that are already doing continuous performance testing have a testing process in place where the tests get kicked off automatically — it's in the pipeline, so it's part of that quality assurance process. If you're already at that step, you need to export your raw results into a database or onto a file share, or however you want to do that. Then you can do it continuously: a release goes through the pipeline, the performance test kicks off, you export the raw results and compare them to the previous run, because you stored everything. If you're at that point, you can start doing that, which is amazing — you just look at the previous test you've done and the new test, compare the previous situation against the current situation, and use the many statistical tools available to us to start doing this. What I typically see people do is compare means, compare averages. They just run SLAs over them — can't be higher than five seconds, and if it is, fail the test, stop the pipeline, that kind of stuff. Which is OK, because obviously SLAs are important, but it's not actually what we always want to be looking at. We are more interested in change, in the difference — whether there's something relevant for me to look at. And on a second note, SLAs are not always defined well enough that you can really fail a performance test on them. Usually they're much higher than what the actual response time really is — for example, your application generally responds within a second, but you have an SLA of five seconds, just highballing some numbers.
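For readers who want to picture the naive SLA-over-averages check Joey describes here (before he gets to the change-based analysis later in the episode), a minimal Python sketch might look like the following. The file name, column name, and five-second threshold are illustrative assumptions, not part of Joey's setup.

```python
import csv
import statistics

# Hypothetical export from a load test run; the column name is an assumption.
RESULTS_FILE = "current_run.csv"
SLA_SECONDS = 5.0  # Illustrative SLA, deliberately far above typical response times.

def load_response_times(path: str) -> list[float]:
    """Read raw response times (in seconds) from a CSV export."""
    with open(path, newline="") as handle:
        return [float(row["response_time"]) for row in csv.DictReader(handle)]

times = load_response_times(RESULTS_FILE)
average = statistics.mean(times)
p95 = statistics.quantiles(times, n=100)[94]  # 95th percentile

print(f"average={average:.2f}s, p95={p95:.2f}s")
if average > SLA_SECONDS:
    raise SystemExit("SLA breached: failing the pipeline stage")
```

The point of the episode is that a gate like this rarely fires on its own, which is why the later sections move to comparing whole distributions instead.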

[00:06:43] Joe Colantonio So I'll be honest, I realized I was probably a terrible performance engineer after seeing your session, because all we really did was look at the averages — we even did like a 95th percentile — and we'd run it maybe three times. And if we had one run that was really an outlier, we'd say, oh, let's just ignore it because it's outside the scope. But you made me rethink that. I think you had a quote from one of your mentors, that averages hide the ugliness of an application. Can you talk a little bit more about what that means? I think a lot of people are maybe like me, old school, where they always use averages for performance.

[00:07:14] Joey Hendricks I don't dislike averages — let's get that out of the way first. Averages are great, because if I show an average line to a business owner, that makes more sense to them, that clicks. But we're technical people, so we want to see everything. And I have a great visualization of the impact of an average — it's on my GitHub, and it's something that gets shared a lot on LinkedIn too. It's from a test I've done where I plotted the average line, and then over it I plotted the exact same data as a scatter plot, every single measurement, and a way more interesting pattern starts to unfold. So averages are great for reporting to stakeholders, but it's very difficult to base your entire analysis on them, because you generally have all the data available — especially if you're running a load test, your load test tool will record everything. You can just export that and see the entire raw data set. I would recommend checking out my GitHub and seeing the actual visualization, because I think it really speaks to the importance of raw data. As my mentor always says, the average hides the ugliness of the application: because of the averaging, patterns become smaller, less visible, or even disappear completely. You can see that perfectly in that example I have. It hides a bit of how ugly the application really is; it makes it look a bit prettier than it is. So for a business owner it's OK to share, but to base your entire analysis solely on an average line is quite a limb to go out on, because you're not looking at the complete picture, which you should be.
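To get a feel for how a single average line can flatten interesting patterns, here is a small Plotly sketch with synthetic data. Joey's real visualizations live on his GitHub; this is only an illustrative reconstruction of the idea, and the numbers are made up.

```python
import random
import plotly.graph_objects as go

# Synthetic data for illustration: a mostly fast endpoint with an occasional slow tail.
random.seed(1)
timestamps = list(range(2000))
response_times = [random.gauss(0.8, 0.1) + (3.0 if random.random() < 0.03 else 0.0)
                  for _ in timestamps]
average = sum(response_times) / len(response_times)

fig = go.Figure()
# Every raw measurement as a scatter point...
fig.add_trace(go.Scatter(x=timestamps, y=response_times, mode="markers",
                         marker=dict(size=3), name="raw measurements"))
# ...versus the single average line that hides the slow tail entirely.
fig.add_trace(go.Scatter(x=timestamps, y=[average] * len(timestamps),
                         mode="lines", name="average"))
fig.update_layout(xaxis_title="request #", yaxis_title="response time (s)")
fig.write_html("average_vs_scatter.html")
```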

[00:08:57] Joe Colantonio So I guess the challenge there could be that if you have one outlier, or a few outliers, it's hard to determine what actually caused them, or whether it's a real issue. How do you know you're not going down a rabbit hole at that point?

[00:09:10] Joey Hendricks I think it's also a bit about context. If you know what you're doing and you know how the system behaves, then if you suddenly get insane outliers, there is a problem. But also, if I take APG as an example, we have customers that have way more complicated calculations going on than your average Joe. We have normal customers, very small customers, and very complicated customers with a lot of history and a lot of — not problems, but complexity — around them. Because of that, it just takes longer to calculate them. Most load testing tools will also let you see which customer number you used in a request, and then you can verify whether that outlier is a genuine outlier. It could just be a very high value because there's complicated data behind it, and the system is simply doing a lot more work.

[00:10:06] Joe Colantonio Absolutely. So, you talked a lot about raw data — what is the raw data? Does it also include things like metrics from the different monitors you're using, so you're able to do correlation? Or is it just response times? What is usually included in the raw data that you're sending to the database and then using to create these graphs?

[00:10:27] Joey Hendricks Well, raw data is a general concept. It basically means that you keep every measurement, so it depends on what you take measurements of. For me, the most important thing is usually user experience. I really care about how people are experiencing our application and how it's performing. I care less about CPU statistics — I care about them, but they're more of a secondary measurement to me, because if the CPU is blazing at 80% but I'm delivering a stellar user experience under the load that I want, then everything is fine for me. I do care about them, though: if you move to the cloud, where you have to pay for it like a gas bill, the more CPU you use, the higher your bill is going to be. From that point of view, once you're on that kind of system, you also want to try to reduce it. But for giving a good digital experience — which is why we build these applications, so we can provide our customers with an online service — we want to give them the best experience possible, and if that comes with higher CPU consumption, I can live with that as long as we deliver a good experience.

[00:11:33] Joe Colantonio So you brought up two points: when you're doing analysis, it all depends on the quality of your data and your testing environment. So how do you make sure you have good raw data, if that makes sense, rather than just throwing everything in? Is there a certain approach?

[00:11:46] Joey Hendricks I think the question you're really asking is more like, how can you trust your raw data? Because—

[00:11:52] Joe Colantonio Yes, right, right.

[00:11:53] Joey Hendricks Raw data is raw data — you can have raw data from an unstable test environment or from a stable one. But generally, what you want to be testing is not how unstable your test environment is, but the change that you've introduced to it. If you have an unstable test environment that goes haywire every time you start a test — you've changed nothing, nothing has changed, and suddenly the average, for example, goes from 5 seconds to 7 or 8 seconds — then there's probably something wrong with your testing environment. So before you can even start with continuous performance testing, make sure that you have a stable test environment that always reacts the same way, and only changes when you introduce a difference. I think that's the main point you want to establish: whatever change you introduce to your test environment is what you're testing, and your environment otherwise remains stable. A good way to test this out is to just run 10 identical tests against your test environment and see if they look completely alike. If the test results look completely alike and the numbers are within the same ranges, then you have a stable test environment. That's the cue to start continuous performance testing, and to start trusting the raw data that flows out of it. It's the same with water: you don't want to drink dirty water, so you make sure it's clean. It's the same with your test environment — you want to make sure it produces clean test data that you can trust.
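A rough sketch of that stability check — run several identical baseline tests and verify their key percentiles stay within a small band — might look like the following. The file layout and the 10% tolerance are assumptions for illustration; tune both to your own environment.

```python
import glob
import statistics

TOLERANCE = 0.10  # Allow ~10% spread between identical runs; an illustrative choice.

def percentile_95(values: list[float]) -> float:
    return statistics.quantiles(values, n=100)[94]

# Hypothetical layout: one plain-text file of response times per baseline run.
runs = []
for path in sorted(glob.glob("baseline_run_*.txt")):
    with open(path) as handle:
        runs.append([float(line) for line in handle if line.strip()])

if len(runs) < 2:
    raise SystemExit("Need at least two baseline runs to compare.")

p95s = [percentile_95(run) for run in runs]
spread = (max(p95s) - min(p95s)) / statistics.median(p95s)

print(f"95th percentiles per run: {[round(v, 2) for v in p95s]}")
if spread > TOLERANCE:
    print("Environment looks unstable: identical runs differ too much.")
else:
    print("Environment looks stable enough to start continuous performance testing.")
```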

[00:13:24] Joe Colantonio Cool. So I guess that would then create a baseline. And is that what you would use as your baseline to measure against going forward, so that if anything that runs after it is off the norm, you'd be alerted? Or how does that work?

[00:13:35] Joey Hendricks So the thing is, what I care most about is change — and that includes positive change, right? Going faster can be bad, or it can be very good. If we've been aiming to change the system a lot through performance improvements we've introduced, then when we go to release we know we're expecting a performance increase — but we're expecting a change. It could also be that we're expecting a negative change. Negative and positive are both changes, and I care about looking at both. So even if the difference between two of my tests is positive, I still want to do something with that: I still want to hold the pipeline and look at it, even though it's good, even though it's positive. The same goes for negative. The only time I don't want to look at it is when it's the same — if it's the same as always, I don't care, just let it go, there's no change, there's nothing to investigate. So for both a positive and a negative change, I want to hold the pipeline and look at it manually. And when I do look at it manually, I'm only looking when it matters: either we've introduced a performance increase and made the system better, or the system unexpectedly became better and we have no idea why — because that's also a problem. And if it slows down, obviously I also want to look at it, because then we have a defect. So I really care about change, and if I can spot change in these kinds of ways, that's great, because there are a lot of ways you can achieve that.

[00:15:05] Joe Colantonio So to spot that change — it seems like you have this embedded in your pipeline. You run a test, the results automatically go somewhere, the analysis happens, and then it only triggers alerts or stops the pipeline based on criteria you put in. What's the technology or tooling you're using to get that?

[00:15:20] Joey Hendricks Statistics.

[00:15:20] Joe Colantonio All right.

[00:15:26] Joey Hendricks So, a bit of backstory: when I started playing around with this, I didn't have a statistical background, so I was kind of a nitwit at it. I knew math, but not at that level. So I started researching how people in statistics find change, and they generally talk about distance. They talk about the amount of distance between one distribution — basically one test — and another distribution, another test. If you have two performance tests, you have two distributions of data, so I want to see how much distance there is between those two performance tests. There are a lot of distance metrics rooted deeply in statistics, and they are excellent for finding out how much distance there is. There are two which I really like, and I'm probably going to mispronounce this for the 15th time, but the Kolmogorov-Smirnov distance is one of them. It comes from a hypothesis test — if you remember those from high school math, you might get flashbacks, and it's not fun, but they're very powerful distance metrics. You also have the Wasserstein distance, which is also a very powerful metric. These two metrics are what I use to see how much distance there is between two performance tests. The Kolmogorov-Smirnov one is interesting because it depicts basically the absolute maximum distance between two distributions, aka performance tests. And the Wasserstein distance gives you something like the surface area between two distributions, so it gives you the general distance between two performance tests. They will always land around the same numbers, so you can classify them. You can say: I have my baseline, which is this line, and the benchmark needs to hug that line and be as close as possible. For a visual representation, I would really recommend checking out my GitHub, because I have all of this in animations that make it a little easier to understand. Those lines have to be very close to each other, because we don't want distance — we want things to stay the same, we don't want any change. And because the Wasserstein and Kolmogorov-Smirnov distances generally stay in the same ranges — they stay near zero when there's little change — you can actually classify them. You can say 0 to 10 is low, 10 to 20 is medium, 30 to 40 is high, and so on, and you can put a letter rank next to that: if those distance metrics are around these numbers, then it's an S or an A, and so forth — like the Japanese letter-rank system, where S is super, above A-plus, then A is good, and so on. You can give them a rank and then steer on that rank, saying how much positive or negative change has been introduced. Those statistical distances can be used for exactly that kind of thing: they automatically verify whether there's an interesting change between two tests. This can get very technical, and I'm skipping some parts, so I also recommend reading my readme, because there's a lot of other complicated stuff — cumulative distribution functions, the normalization of the data — all very important factors to consider.
There are obviously a lot of Wikipedia articles behind this where the calculations look completely elvish and difficult, but there is a wealth of information there, especially if you're trying to automate and make smart decisions around your data.
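For the curious, a minimal sketch of computing these two distances with SciPy might look like the following. The synthetic data, the min-max normalization stand-in, and the rank cut-offs are all illustrative assumptions; Joey's project defines its own normalization and boundaries, so see his readme for the real approach.

```python
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

# Two performance tests = two distributions of raw response times (synthetic here).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=1.00, scale=0.15, size=5000)   # previous release
benchmark = rng.normal(loc=1.15, scale=0.20, size=5000)  # current release

# Joey mentions normalizing the data first; a simple min-max normalization over
# both runs is used here as a stand-in so both metrics land on a 0..1 scale.
low = min(baseline.min(), benchmark.min())
high = max(baseline.max(), benchmark.max())
norm_base = (baseline - low) / (high - low)
norm_bench = (benchmark - low) / (high - low)

# Kolmogorov-Smirnov statistic: the absolute maximum distance between the two ECDFs.
ks_stat = ks_2samp(norm_base, norm_bench).statistic
# Wasserstein distance: roughly the area between the two distributions.
wd = wasserstein_distance(norm_base, norm_bench)

def rank(distance: float) -> str:
    """Illustrative letter ranks; the real cut-offs are project-specific."""
    for upper, letter in [(0.02, "S"), (0.05, "A"), (0.10, "B"), (0.20, "C"), (0.35, "D")]:
        if distance <= upper:
            return letter
    return "F"

print(f"KS statistic = {ks_stat:.3f} -> rank {rank(ks_stat)}")
print(f"Wasserstein  = {wd:.3f} -> rank {rank(wd)}")
```

In practice you would feed in the exported raw response times of the previous and current runs instead of the synthetic arrays.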

[00:19:12] Joe Colantonio Nice. So I guess, how much statistics does a performance engineer need to know, though? Are these codified in some sort of function that someone could just use, or do they actually have to program it up in their pipeline to use the different statistical methods you just mentioned?

[00:19:27] Joey Hendricks Well, normally if you're in the pipeline you already have some ability to program there, so you could use Python — and Python has the SciPy package, which has basically every statistical calculation you could possibly want in it. So there's no need to understand this from a math point of view; you don't really need to understand how the equation works, but you do have to understand what the number represents, what it means. For the Wasserstein distance, that is the general area between two tests, and the Kolmogorov-Smirnov statistic — or the KS value, to make it a bit easier on me to pronounce — is the absolute maximum distance between the two tests. They're great metrics to tell you: is this change big or small, is it concentrated in one place? And you can make smart decisions on that. You can supplement those decisions with other checks, like just checking whether the average is slower or faster, so you can determine if your test is running faster or slower than the previous one and decide based on that. You can check the throughput levels — are you still taking in the same amount of throughput — you can check the errors, and you can add a secondary layer where you put your SLAs over it, so if there's a difference and it doesn't fit the SLAs, then we have a genuine problem. You can supplement those metrics with other stuff and rank them. Computer science has a word for this, and I love this word: a heuristic. It's basically a set of criteria — if the results meet those criteria, then it's good. A heuristic is just a way of taking very complicated data, like your performance results or your entire application, and making it understandable with a rule of thumb. So: if we know the Wasserstein distance is this big, we know that's an interesting change for the application; if the error rates are outside these ranges, we have a problem; if the throughput goes below a certain level, we also have a problem; if the average is lower or higher than this — and you can put good boundaries around some percentiles as well, or the percentage change between two percentiles. You can really tune this to what you want, to which decisions you want to take around your data. How well this works depends on the context of the application as well. For me it's working very well because I have a very stable test environment; my numbers stay in the same area, and that allows me to rank them from S to F and then tell a business owner: OK, the change the team introduced was an F, but a positive one, so they made a massive improvement to the system. Or it's a B, so I would recommend not to release, because it has an impact on the system and it produces more errors — something like that. For a business owner, a B, an S, an F, a C is understandable, whereas all these raw numbers would just be information overload for them.
They just want a simple, easy-to-understand metric. And in the pipeline we can also make decisions with those ranks: if I have an F, I do this, I create this defect for that team, and so on. You can also apply this kind of analysis to every single API or action in your test, to really verify and say: OK, this API is broken, it has more errors than normal, or there's an interesting change in its performance — let's create a ticket for that team, just a Jira defect, and hold the pipeline. The release stops there until that team looks at it, and then the release continues either with a fix or with the team accepting the risk of going to production with a slightly slower transaction or application.
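As a sketch of what such a heuristic could look like as a pipeline gate, here is a toy Python example. Every field name and threshold is an assumption for illustration, not Joey's actual ruleset; the idea is only to show how the distance rank, error rate, and throughput checks can be combined into one release decision.

```python
from dataclasses import dataclass

@dataclass
class RunComparison:
    distance_rank: str       # e.g. the S..F rank derived from the distance metrics
    regressed: bool          # True if the candidate run is slower on average
    error_rate: float        # fraction of failed requests in the candidate run
    throughput_ratio: float  # candidate throughput / baseline throughput

def verdict(c: RunComparison) -> str:
    """A toy heuristic: every threshold here is an illustrative assumption."""
    if c.error_rate > 0.01:
        return "fail: error rate above 1%, raise a defect and hold the pipeline"
    if c.throughput_ratio < 0.95:
        return "fail: throughput dropped more than 5%, hold the pipeline"
    if c.distance_rank in ("S", "A"):
        return "pass: no relevant change, let the release continue"
    if c.regressed:
        return "hold: relevant negative change, manual analysis needed"
    return "hold: relevant positive change, confirm the improvement is expected"

print(verdict(RunComparison("C", regressed=False, error_rate=0.002, throughput_ratio=1.01)))
```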

[00:23:42] Joe Colantonio Interesting. So, when I think of F, I think bad, but you just said F could be good because it's a big change — but, like you said, a positive one. So do you do automatic alerting, like your vacuum when it's stuck on the rug — to say, OK, the check-in worked, you got an F, along with a high-level analysis, automatically? Or do you actually analyze it first before you alert people? At what point does a human get involved with this?

[00:24:06] Joey Hendricks Yes, sure. It's the same as my vacuum cleaner: if it gets itself stuck on the rug and it's standing there trying to drag the rug around the room, it usually takes a minute or two and then it starts complaining, and I get a notification on my phone to come check what's wrong with it. Then I see it standing there and I know, OK, I should be here. It's the same with my tests: if something really goes wrong, I want to be there. So I'm usually the first contact — a notification goes out to me and some other teams: hey, something's wrong. And when I look at this from the APG context, we also have teams looking at these numbers. When they're doing a release, they're looking at these ranks — we generally use a score — so they can also look at the score and see, OK, it's bad. Then it gets fed up the command chain, someone ends up analyzing it, and they say, OK, we can accept this: the benefits of releasing the software now outweigh the performance problems we might get. Now we're taking a calculated risk instead of an unknown risk, because we know exactly what is wrong and we can fix it in production later. Or, if the risk is too high, we can decide not to do it: hold the release, fix it, work on it a week longer, and then go to production with it. It really comes down to what your organization wants to do and how they want to tackle it.

[00:25:28] Joe Colantonio You know, with automated testing there are usually a lot of false positives, where you're getting alerted but it's really not an issue. I guess, how stable is this approach? And is there ever a time when someone updates your environment without you knowing, and that makes a negative difference, but you didn't know a change was made?

[00:25:48] Joey Hendricks Oh, that's a very good question. Generally speaking, if your test environment is stable, then when a change is introduced it will catch it, because the numbers change — if there is something interesting going on, the numbers will change. Do keep in mind that you need a lot of data to do this; you can't do it on a very low throughput test. The same consideration goes for high throughput tests: if you're a telco that pushes out a couple of million transactions, you have huge raw data sets, and those calculations are going to take longer because there's more data, more numbers to crunch. I'm on the lower end when it comes to throughput, so for me the calculations don't take long at all; I can run them fairly quickly. And the point I'm trying to come to is that I guard my test environment. Nobody touches it, nobody does anything on it without my permission, so I can safeguard it from weird changes — somebody decides, you know, let's update all the test data, and now some customers are gone, some are different, or the kind of information those customers have has changed, which impacts the performance. The context of the customer really impacts my performance: if I have a database snapshot from April this year and the customers I use in my test data set become, for example, way more complicated, then the response time goes up because my data has changed. That's why we generally have rules in place around the performance test environments: do not touch them without permission, because if you do, you can impact the tests. We create a lot of awareness around that, which is also something you should generally do. We treat it a bit like a special test environment with limited access, where people request access if they want to test something against a more complicated environment than they usually have. That all runs through me or a few other people; I do the analysis and say, OK, yeah, you can do that — nobody's using the test environment at that time, you're not in the way of any performance tests, go at it. Changes also have to be approved by my team, just to make sure they don't impact the tests, and if we do approve them, we generally test them. Usually all the changes we make in that environment are going to be made in production a week or two later anyway, so we have to make them anyway, and then we see what the impact is.

[00:28:17] Joe Colantonio Great. Also in your session, you have really good-looking graphs. What do you use to create those graphs?

[00:28:22] Joey Hendricks I use Tableau. And I think I'm the least powerful, least sophisticated Tableau user, because what I literally do is connect to my database — an Oracle or a MySQL database where the raw data is stored — and I visualize the graphs. I get the nice graphs out of it, or I just connect a CSV file and create the graphs that way. All the graphs you see from me are either plotted with Tableau or with Plotly for Python. If I have to generate a couple of hundred images for a GIF, I'm not going to do that by hand in Tableau, because that's a lot of work. So I generally use Tableau if I quickly need to graph something or make dashboards, and I use Plotly for anything I have to automate. If I want to send something to a team in an automated fashion, I use Plotly, or maybe an APM tool like AppDynamics or Grafana or something like that.

[00:29:17] Joe Colantonio Okay, Joey, before we go, is there one piece of actionable advice you can give to someone to help them with their performance testing efforts? And what's the best way to find or contact you?

[00:29:26] Joey Hendricks Yeah, you can always contact me on LinkedIn — shoot me a message, connect with me, and let's talk performance. Generally, all of the stuff I've spoken about is very well documented on my GitHub, so you can find all the code needed to start doing this yourself; it's all available in Python. All of it is explained much better in my readme, because it is a difficult subject — it's not an easy talk, and it's not easy to understand in one go — but it's all in there. If you want to test it out, there's some example data you can run it on, so you can see for yourself that it works, and works in the way I describe. You can also look back at my talk at the PAC for more information, and you can generally find everything on my GitHub profile. And if you want to commit and try to make this better — if you know way more about statistics than me, please help me out. Let's make a better project out of this, and try to make a one-size-fits-all solution for performance engineers, so they can just download a Python script, embed it into their pipeline, and start automated result analysis on their own, for their own needs.

[00:30:40] Joe Colantonio Thanks again for your performance testing awesomeness. That's everything we've covered in this episode. Head on over to testguid.com/p80, and while you're there, make sure to click on the "try them both today" link under the exclusive sponsor's section to learn all about SmartBear's two awesome performance test tool solutions, LoadNinja and LoadUI Pro. And if the show has helped you in any way, why not rate and review it on iTunes? Reviews really do matter in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Performance and Site Reliability Podcast. I'm Joe. My mission is to help you succeed with creating end-to-end, full-stack performance testing awesomeness. As always, test everything and keep the good. Cheers.

[00:31:28] Outro Thanks for listening to the Test Guild Performance and Site Reliability Podcast. Head on over to testguildcom.kinsta.cloud for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.

 

Rate and Review TestGuild Performance Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
