The New Age of Performance Testing

By Test Guild

About This Episode:

In this episode, we're diving deep into the world of performance engineering with our esteemed guest, Dylan van Iersel, an experienced IT consultant and co-founder of Perfana. We'll explore the intricate relationship between software performance and business outcomes and how tools like Perfana can democratize and simplify the process of performance testing.

Performance is more than just a technical concern; it has direct implications for customer satisfaction and the bottom line. Dylan illuminates the importance of integrating performance testing within the CI/CD pipeline, using Perfana to serve as a quality gate and provide actionable insights through automated analysis and dashboard visualizations.

We'll also discuss the evolution of performance engineering in a cloud-native, containerized landscape, the challenges of scaling performance testing across agile teams, and why the “shift left” approach in identifying issues early is crucial for today's development processes.

For teams looking to embrace performance testing, Dylan introduces Perfana's starter package and emphasizes the ease of getting up and running, even on a local laptop, as a foundation for more extensive integration into test environments and CI/CD pipelines.

For our listeners interested in cutting-edge developments, we dive into how Perfana innovates with data science and machine learning to enhance anomaly detection and root cause analysis. Plus, we'll get into the nitty-gritty of why observability, while important, shouldn't be your sole resource for performance testing.

Listen to discover actionable advice and insights on improving your team's performance engineering efforts.

Check out the Perfana free trial now

About Dylan van Iersel


Dylan van Iersel is an experienced IT consultant with nearly 25 years in the field, specializing in agile consulting, architecture consulting, general IT, and project management. A pioneer in implementing Scrum and agile practices, he has led initiatives in continuous integration, deployment, and automated QA since the early 2000s. Dylan's focus on automated testing and equipping teams with the most effective tools and methods naturally culminated in his co-founding of Perfana in 2019, a Continuous Performance Engineering solution.

Connect with Dylan van Iersel

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:20] Joe Colantonio Hey, it's Joe, and welcome to another episode of The Test Guild Automation Podcast. Today, you're in for a special treat because I'm going to speak with Dylan all about what I think is a super important topic: next-level performance engineering and why it's so important. You don't want to miss this episode. If you don't know, Dylan is an experienced IT consultant with nearly 25 years in the field, specializing in agile consulting, architecture consulting, general IT, and project management. He's a pioneer in implementing Scrum and agile practices, and he's led initiatives in continuous integration, deployment, and automated QA since the early 2000s. So he really knows this stuff. Dylan also focuses on automated testing and equipping teams with the most effective tools and methods, which all came together in his co-founding of Perfana in 2019, which seems like a really cool piece of tech you'll want to know about. It's a continuous performance engineering solution. If you don't know what that is, or why you'd need a solution like that, you want to stay all the way to the end. You don't want to miss it. Check it out.

[00:01:27] Joe Colantonio Are you ready to elevate your software's performance? Before we jump into today's awesome conversation with Dylan, let's take a moment to spotlight Perfana, a trailblazing tool in performance engineering. If you've ever found performance testing to be a daunting task, you're not alone. Believe me, I started off as a performance engineer over 25 years ago, and it was really difficult. But now tools like Perfana are here to change the game by automating the whole process. They integrate seamlessly into your CI/CD pipeline and serve as your quality gatekeepers, with dashboards, detailed traces, and continuous profiling. Perfana streamlines your testing and ensures that you're always in the know with automated result analysis that distills complex data into actionable performance insights, without you needing to be a data science expert. So as we gear up for our discussion with Dylan, one of the co-founders of Perfana, consider the impact of proficient performance testing on your daily releases. Make your application more performant now: check out the free trial offer for Test Guild listeners at testguild.com/perftrial, the link is down below.

[00:02:34] Joe Colantonio Hey Dylan, welcome to the Guild.

[00:02:39] Dylan van Iersel Hey, Joe. Nice to be here. Thank you.

[00:02:41] Joe Colantonio Awesome to have you. I guess before we get into it, is there anything in your bio I may have missed that you want The Guild to know more about?

[00:02:47] Dylan van Iersel No, I don't think so. I think you pretty much covered it all. Yeah. Like you said, I'm always very much focused on developer productivity and providing developers with the right tools so they can focus on the good stuff instead of the mundane and repetitive stuff, so to speak.

[00:03:06] Joe Colantonio All right. I guess that leads me to one of the reasons I always love speaking to founders: why they invested all their time and energy into a solution. You selected performance engineering. So I guess, why performance engineering?

[00:03:19] Dylan van Iersel Well, it stems from one of my earlier jobs, where we used to have really big performance issues. We were also developing software with a large number of teams, so you can imagine that testing performance in such a large-scale operation is pretty intense. And then I came across Daniel, who's my co-founder, and he had developed a solution for exactly this problem, which I thought was really super interesting. And it worked well. So we decided to turn it into a commercial product.

[00:03:49] Joe Colantonio Great. Yeah, I'd love to dive in. I guess the first thing we should probably tackle is terminology, because for some reason the terminology around performance testing seems to throw people off. You mentioned a lot of things that I see online around performance engineering. I'm just curious to get your take on what you see as performance engineering versus performance testing.

[00:04:09] Dylan van Iersel Yeah, that's a good question, and it's exactly what we've struggled with ourselves quite a bit, because initially we positioned ourselves as a performance testing software company. But it's a bit like this: you can test your system for performance and just tick the check marks. Does it actually handle the required load? We think that leaves a lot on the table to be further investigated and further optimized, which has to do with the metrics that the actual system under test is experiencing while you're doing your load test. Imagine a real-life situation, because in the end we make software for production, for live customers. Your application can be performing really well, and all your customers are, of course, delighted with the software. However, the way you're able to achieve that is by scaling your way out of problems. That means there's still an issue, or multiple issues, in your software, even though you're able to manage those issues by simply scaling out. So that's the difference between performance testing and performance engineering: in the first, you just simply check, does the system handle the load? Does it not throw hundreds of errors? Does it keep running? Does it stay alive? And the second is actually diving deep into the system and further optimizing resource usage, scalability and stability issues, resilience, that kind of stuff.

[00:05:49] Joe Colantonio Absolutely. So when you talk about scaling, nowadays people say, oh, we'll just scale. We have a containerized environment, we'll just scale up more and more infrastructure to handle it, so we'll have no performance impact whatsoever. Obviously, I know that's not true, and you probably know that's not true. So can you tell us a little bit about how you can scale performance testing and why it matters for optimization, especially as we now live in a cloud-native, containerized world?

[00:06:13] Dylan van Iersel Well, there are two aspects to this. One is scaling your way out of issues, but the other is scaling the performance testing itself. If you have a modern, let's say, e-commerce setup, in modern architectures there are multiple microservices, databases, frontends, mobile applications, you name it. There are a lot of components that contribute to the overall user experience and also to the cost. The traditional way of doing performance testing is: we release once a year, so we do a performance test once a year. The modern way is, of course: but we don't release once a year; my microservice releases two times a week, the microservice of the team next to me is also releasing two times a week, the front end is releasing three times a month, and so on. In that high-speed, agile environment, performance testing really becomes an issue. You can't test all those systems together at once anymore; you need to do isolated regression testing. So the testing itself becomes an issue of scale. How do you test dozens or hundreds of applications across tens or hundreds of teams and still manage to go to production every day or every week? That's why we think this is all about automation. Not just automating the execution of the test, because that is, to be honest, rather easy. The real work is actually in collecting and analyzing the data that comes from that load test. If you execute load tests across this ecosystem, like ten every day, then collecting and analyzing the data becomes a real, real hassle.

[00:08:20] Joe Colantonio All right. So is that what you mean by next level then? I know you talk a lot about taking it to the next level. It's so hard for me: I always said writing a performance test is probably the easiest thing, but now we have all these other components and all these other dependencies. How do you even know how to create a performance test in the first place? And the second thing is, the results were always hard to correlate: oh, this is because of this and that. So if you have multiple agile teams, and they don't necessarily know what the other team is working on, but they actually have dependencies on one another, how does that all work when it comes to performance? Doesn't it make it even harder to do?

[00:08:54] Dylan van Iersel In a way, yes, of course, and in another way, this is actually why we heavily promote, and I still believe heavily in, shift left. You want to get the feedback as early as possible. You want to know right away if you've introduced a performance issue. So in order to do that for just your team and just your, let's say, microservice, you have to, of course, design your test in a way that it's kind of isolated. This also allows you to do it repeatedly, over and over again. Automation and small scale allow you to run it repeatedly and get fast feedback. The way to do that is, for example, to set up stubs or mocking so that you can simulate the components external to your microservice. And there is of course another stage, if you will, where you combine a number of those microservices and introduce, let's say, the database, so it becomes a mix of components. You test those together as well, you collect all that data, you put SLOs on those, and you simply validate the results and look for ways to optimize the software.

[00:10:06] Joe Colantonio Okay. So who's responsible for this? Do you have to have a data scientist who actually looks at all this data and knows, okay, this is because of this and that, and makes a hypothesis to see if it's true? How does that all work?

[00:10:17] Dylan van Iersel That's actually what we do for the teams, because developer experience is a really big topic. For developers to be productive on the thing they like to be productive on, which is creating code, they don't have the inclination or the desire to learn data science. That's why you need automation. You need smart solutions that know how to collect that data, analyze it, and draw conclusions from it, so that the teams themselves, and possibly the occasional performance engineer supporting those teams, can just work from the conclusions forward. No data science is involved; that's all in the software, or at least we believe it should be in the software. What Perfana does, for example, is provide common solutions for common stacks with known, let's say, patterns in the data that indicate or point towards possible errors or discrepancies, and our analysis software simply executes the analysis and gives that feedback.

[00:11:33] Joe Colantonio All right. So I assume a lot of teams you work with already have a lot of performance tests, but they're probably not very structured. So how does Perfana come in? Is Perfana like a wrapper on top of that, giving them a framework they can implement and point to, which then analyzes all the different inputs and creates outputs with some insights based on them?

[00:11:54] Dylan van Iersel Yeah, that is pretty much it. We work with a number of open-source load test tools, so you can write your test in JMeter, Gatling, or k6; that's all fine. When you execute the test and send the metrics from both the system under test and the load test tool itself to, for example, InfluxDB or Prometheus, Perfana picks them up and analyzes them automatically. All that's required is that when you execute the script and run the system under test, some tags are put on the metrics data, so that we can collect it and automatically create the dashboards and the analysis for you.
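
To make that concrete, here is a minimal k6 sketch of the kind of tagged load test Dylan describes, shipping its metrics to InfluxDB. The tag names, URLs, and output target are illustrative assumptions, not Perfana's required configuration; check the Perfana documentation for the exact tags it expects.

```typescript
// Minimal k6 sketch (illustrative only). Tag names and URLs are placeholders.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // virtual users
  duration: '10m',  // steady-load duration
  // Global tags are attached to every metric sample, so the analysis side can
  // tell which system under test and environment this run belongs to.
  tags: { systemUnderTest: 'checkout-service', testEnvironment: 'acceptance' },
};

export default function () {
  const res = http.get('https://sut.example.com/api/products', {
    tags: { name: 'get_products' }, // request-level tag for per-endpoint metrics
  });
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}

// Ship the metrics to a time-series database the analysis tooling can read, e.g.:
//   k6 run --out influxdb=http://localhost:8086/k6 script.js
```

The same idea applies to JMeter or Gatling: the load test tool writes its metrics, tagged with the test run's identity, into InfluxDB or Prometheus, and the analysis layer reads them from there.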

[00:12:39] Joe Colantonio How do you typically see people pushing performance testing left? Do you see them doing, like, 10,000 users? Or is it more just, how does it perform when it's interacting with this other feature? How does that work? In order for people to be successful with performance, like you mentioned, you need to shift to the left, but a lot of people have problems even doing performance testing. How do you recommend they shift to the left so they can benefit from a tool like Perfana? Do you have any good practices that you've seen make it really work well for development teams?

[00:13:09] Dylan van Iersel Oh, that's really a good question. The way that we usually work is that we have an onboarding program, which works really well. When a team has a desire to start load testing, there's little effort involved in setting the SLOs, writing the test script, integrating it into the CI/CD pipeline, and so on, and that's basically it. From then onwards, it's executed whenever you want: on every commit, or every merge to a certain branch, or whatever. And then that load test is executed, which drives the feedback.

[00:13:45] Joe Colantonio Maybe a weird question: do the teams know they need a solution? Or is it usually a tester who tells the development teams, hey, you know, the application needs to be performant, we probably need these tools in place? Especially as we go towards microservices, how does that work?

[00:13:59] Dylan van Iersel That differs greatly, of course. We've seen all kinds. It's not uncommon that people experience some drastic performance issues in production and then start looking for solutions, and only then actually become aware of the importance of performance, or become aware that they have a problem. So that's one end. It's also very common that the more seasoned developers are already very much performance-aware. Still, very often it's considered very labor-intensive, difficult to set up in the traditional sense, difficult to maintain in the traditional sense. It's usually driven either from the top down or by the teams themselves, depending on what kind of experience there is in those teams.

[00:14:48] Joe Colantonio All right. So you're probably sick of hearing this. So-.

[00:14:54] Dylan van Iersel AI. AI. AI. AI.

[00:14:54] Joe Colantonio This is one area where I've always said it makes sense to have AI, specifically machine learning, because you have all this data, and I would think that's what machine learning was built for. So is that something Perfana embraces, something you all use or utilize?

[00:15:10] Dylan van Iersel Yeah, we're definitely looking into it. At the moment, we're experimenting with what these days you would call traditional data science methods, to automatically detect anomalies in the data. Right now, people need to determine the service level objectives for their software themselves, and if you don't know what you're looking for, then nothing is reported to you, right? The easy way to get started is to set SLOs on CPU usage, or the number of requests, or garbage collection durations, things like that, but you might be missing a whole lot of signals that deviate from the norm, and those might indicate an issue. So we're developing a feature that automatically detects these patterns and automatically detects large deviations from them. That's a more traditional data science approach. The next step is, of course, to employ more machine-learning approaches for actual pattern recognition, which might drive root cause analysis.
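
As a rough illustration of the "traditional data science" idea Dylan mentions, and not Perfana's actual algorithm, a baseline-deviation check can be as simple as a z-score of the current run's value against earlier runs:

```typescript
// Conceptual sketch only: flag a metric whose value in the current run
// deviates strongly from the values observed in earlier baseline runs.
function isAnomaly(baseline: number[], current: number, threshold = 3): boolean {
  const mean = baseline.reduce((sum, v) => sum + v, 0) / baseline.length;
  const variance = baseline.reduce((sum, v) => sum + (v - mean) ** 2, 0) / baseline.length;
  const stdDev = Math.sqrt(variance);
  if (stdDev === 0) return current !== mean; // flat baseline: any change is a deviation
  return Math.abs(current - mean) / stdDev > threshold; // z-score check
}

// Example: 95th-percentile response times (ms) from five earlier runs vs. this run.
const previousRuns = [410, 425, 398, 430, 415];
console.log(isAnomaly(previousRuns, 900)); // true  -> large deviation, worth flagging
console.log(isAnomaly(previousRuns, 420)); // false -> within the normal band
```

The appeal of this kind of approach is that nobody has to write an SLO for every metric up front: anything with a stable history that suddenly moves gets surfaced.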

[00:16:24] Joe Colantonio All right. I've also been reading a lot about Perfana. I came across it, probably in a news article, and said, oh, this seems like a cool solution. And I love how you have it written everywhere: software performance equals business performance. I always find that it's hard for developers on the teams I've spoken with to see the value of performance, or to be able to explain to the business why it's important. So can you explain a little bit more what you mean? Because I think this is a really good, killer analogy. Why does software performance equal business performance?

[00:16:55] Dylan van Iersel Yeah. Well, I'm always a little bit surprised, because for me it's very evident and also something that I've known for a long time. But, for example, Amazon and Google have done research on the value of performance, and one big conclusion from that is that better performance directly impacts customer satisfaction and conversion. Everybody gets that, of course. And everybody also gets, I think, that nobody wants really bad performance, because really bad performance means your site might not function at all. I think there was an incident with some ticket sales a couple of months back that completely burned down the site; that was a bit of a shocker. But there's a third aspect, and that's what I touched on a little earlier: performance also directly impacts the cost of keeping those systems running. If your software is not performing adequately, you might be using two times the amount of resources that would have been necessary if you had optimized the software. So to avoid unexpectedly high charges, or unexpectedly low customer satisfaction, performance is very, very important.

[00:18:19] Joe Colantonio I think another barrier, and maybe it's not the same because you mentioned a lot of developers you speak to are performance-aware, is that a lot of people, when they hear performance testing, think, oh, it's like a dark art. They have no idea where to start, it's too much effort, they don't know how to do it. And it sounds to me like a solution like Perfana would make it much more accessible for people. Is that something you would agree with?

[00:18:41] Dylan van Iersel Of course. Yeah, absolutely. Because, like I said, automation is the key, and it literally encodes the knowledge of a performance engineer: how to deal with the metrics coming from both the system under test and the load test. That's a whole lot of knowledge, experience, and effort put into software, and it allows the developers and the teams to just focus on developing software. Of course, you have to be somewhat performance-aware while developing, and if you introduce some performance defect by error, you of course have to go back to the code, debug the software, and eliminate that performance bottleneck. But the system is in place to continuously guard you from introducing these potentially very costly mistakes into production software.

[00:19:38] Joe Colantonio How does it guard? Does it bubble up best practices or make suggestions to help you know how to do what you're trying to implement? Or if something you're doing is kind of wonky, does it say, hey, this doesn't look right?

[00:19:51] Dylan van Iersel One of the key aspects is that when you integrate it into your CI/CD pipeline, you can of course use it as a quality gate, right? So the software tests the performance, and if the performance has a large deviation from an earlier run, or from a well-known baseline run that you put in production, let's say, three days ago, then the run is flagged and the build is broken. You need to go back, look, and dive into the details. That's what Perfana then provides: the dashboards and the details of that particular test run as compared to the baseline and the previous run. It also allows you to, for example, dive into the traces and the continuous profiling that are recorded during the performance test. So it gives you immediate access to all the data collected during a performance test, how it compares to the previous run and to the baseline run, and it really allows you to quickly dive deep into what the potential issue is.
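
In a pipeline, that quality gate boils down to: run the load test, ask the analysis backend for a verdict on the new run versus the baseline, and fail the build on a regression. Here is a hedged sketch of that pattern; the endpoint path, response shape, and environment variable are made-up placeholders, not Perfana's actual API.

```typescript
// Conceptual CI quality-gate sketch. The endpoint and response shape below are
// assumptions for illustration; consult your performance tooling's real API.
async function qualityGate(testRunId: string): Promise<void> {
  const res = await fetch(`https://perfana.example.com/api/test-runs/${testRunId}/verdict`);
  const { result } = (await res.json()) as { result: 'PASSED' | 'FAILED' };
  if (result !== 'PASSED') {
    console.error(`Run ${testRunId} deviates from the baseline; breaking the build`);
    process.exit(1); // force the team to investigate before the change ships
  }
  console.log(`Run ${testRunId} met its service level objectives`);
}

qualityGate(process.env.TEST_RUN_ID ?? 'local-run');
```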

[00:20:54] Joe Colantonio So you're not just throwing up a dashboard with a bunch of information. You keep mentioning automated test run result analysis, so it sounds like it automatically does the analysis for you as well. When someone logs in, they automatically see some key things to start looking into, to make it easier to debug?

[00:21:11] Dylan van Iersel Well, in, let's say, the upcoming feature, any metric that deviates from an automatically created baseline is flagged. But up until now, you're responsible for setting the service level objectives, which we can help with. We have golden path profiles, or let's say paved road profiles. If you have a Spring Boot stack, for example, with a PostgreSQL database, then we provide ready-to-go dashboards and ready-to-go SLOs, and the system will detect deviations on those. In the near future we'll go a lot further than that, but that's not there yet.

[00:21:55] Joe Colantonio So it sounds like when someone installs it, you have common templates for those standard stacks, and once you have that, it's like, okay, here are the SLOs we find will be most beneficial for you. It sets that up automatically for you in that way?

[00:22:08] Dylan van Iersel Yeah, exactly. Yeah.

[00:22:10] Joe Colantonio Nice. So I don't know how much you can reveal; you've revealed a little bit so far. Is there anything else on your roadmap? Where do you see the future of performance engineering, and what needs are going to have to be filled, especially as we develop quicker, faster, and more often than we're doing now?

[00:22:28] Dylan van Iersel Yeah, well, there's of course, let's say, the AI route. We came into existence to support development teams as much as we can by automating everything that you need to know about performance testing and performance engineering, and there are still lots and lots of steps we can take there: improvements in root cause analysis using AI, for example, to explain the situations that you've uncovered. Another big piece is adoption by the teams. You can make performance engineering as easy as you can, but if no one in the entire company is actually performance testing, there's nothing to check and nothing to analyze. So adoption also needs to be facilitated, which means you have to have more insights into how you as a team are performing, how you're performing against other teams, and how the company as a whole is performing. So it's performance and performance?

[00:23:37] Joe Colantonio Yes. Yeah. I wonder if you could, not pit teams against teams, but see which one is performing better than another. Insights like: this one's always checking in code where performance is a little off, maybe we need to give them more training. Not to penalize them, but just to see opportunities, let's say.

[00:23:53] Dylan van Iersel Oh, some badges, an indication, for example. Yeah.

[00:23:57] Joe Colantonio Right. Right.

[00:23:58] Dylan van Iersel Those are absolutely topics that are on the roadmap.

[00:24:01] Joe Colantonio Oh, very cool. Love that. All right, I always recommend people get their hands dirty, and I think performance testing is needed and a solution like this is very helpful. Do you have a free trial someone can go to and actually just implement and say, hey, let's check this out and see how it works for us?

[00:24:17] Dylan van Iersel Absolutely. We have a Perfana starter that allows you to get up and running in about 15 minutes. It's just a simple load test on an example system under test, but it allows you to see how it works, how it integrates, and what it can do for you.

[00:24:34] Joe Colantonio Love it. And I'll have a link to that in the show notes as well. I recommend everyone check out Perfana. It's Perfana.io, but I'll have a special link right to the trial. I highly recommend everyone checks it out.

[00:24:48] Dylan van Iersel Great.

[00:24:49] Joe Colantonio All right, Dylan. One thing I've been seeing a lot about, and I actually wrote about it in my 2024 trends, is observability: how developers need to start thinking about observability from the beginning and bake it into their systems. That's one way you can make things more performant, because you have OpenTelemetry and all these solutions that can help you with tracing and everything. Are you seeing the same thing? Do you have any thoughts on observability? Is this something important that you see as a trend in 2024?

[00:25:15] Dylan van Iersel Yeah. Absolutely. It's maybe good to know, if it wasn't clear already, that we build on observability tools. If you want to do a solid performance test, and especially performance engineering, you need proper observability tools to get the data and to be able to analyze it, and Perfana really builds on top of that. What I also see is people and companies relying more and more on observability alone. For me, that's a bit like Max Verstappen, the famous Dutch race driver, going out on the track and just expecting to win the race without doing, let's say, track testing and testing of the car. You can have all the dials and metrics in place in production, and you can have the fastest response to those metrics, but if the so-called dirt hits the fan, you're always lagging. So especially in the context of, for example, SRE and observability, I still think that as good as those tools are and as needed as those observability platforms are, we rely on them a lot, but it's not enough to rely on them alone, because prevention, in the end, I think, is better than curing.

[00:26:41] Joe Colantonio This is a good point, because a lot of people have the mindset of, I don't have to worry about performance, I just use observability, I test in production, so why even spend time and effort on it? And it sounds like this would.

[00:26:52] Dylan van Iersel That's what I hear, yeah. Well, not a lot, but in some cases. And like I said, prevention is still better than curing. Curing is very costly.

[00:27:04] Joe Colantonio Right, for sure. And when we say costly, it's a dollar value, right? Because if you have bad performance in production and you find it, great, but you've already opened yourself up to lost revenue, downtime, probably a bad reputation, all those things. You caught it, good for you, but dang, you really hurt yourself by waiting until the end, right?

[00:27:25] Dylan van Iersel Yeah. I would say it's a risk that you're taking, consciously.

[00:27:29] Joe Colantonio Great stuff. Okay, Dylan, before we go, is there one piece of actionable advice you can give to someone to help them with their performance engineering efforts? And what's the best way to find or contact you, or learn more about Perfana?

[00:27:41] Dylan van Iersel Yeah. Okay. Well, the best way, I think, to get started with performance engineering efforts is first, of course, to just realize that you need it. And then I think it's actually not so difficult to get started. It maybe looks very daunting, which is why we have set up the Perfana starter. Simply create an account, download the starter package, and you're off to the races within 15 minutes. It's a starting point for you to experiment with. Initially, you just run it locally on your laptop: you have a local system under test and a local test to test it, which of course doesn't make sense for real measurements, because in a real-life situation you wouldn't use the same hardware to generate the load and to run the system under test. But it's a nice starting point. The next step is to use that same test but run the system under test somewhere else. The step after that is to integrate it into the CI/CD pipeline. Step by step, it should be fairly easy.

[00:28:44] Joe Colantonio Awesome. And the best way to find or contact you or learn more about Perfana?

[00:28:49] Dylan van Iersel Well, like you said, you're putting a link in the show notes. Other than that, Perfana.io, and you can also look me up on LinkedIn. Reach out.

[00:28:59] Joe Colantonio Awesome stuff. Thank you, Dylan, for this awesome resource. I highly recommend everyone check out that free trial and see it for themselves.

[00:29:05] Dylan van Iersel Thank you!

[00:29:06] Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a488. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:29:40] Hey, thanks again for listening. If you're not already part of our awesome community of 40,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the FAM at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
