Automated Quality Gates in Performance Testing with Roman Ferstl

By Test Guild

About This Episode:

How do you genuinely integrate your performance testing efforts into your DevOps lifecycle? In this episode, Roman Ferstl will share his experience with creating automated quality gates in performance testing. Discover how to automate time-consuming and repetitive performance engineering tasks. Listen up to find out which tools and techniques to use to get started on your performance testing CI/CD efforts.

TestGuild Performance Exclusive Sponsor

SmartBear is dedicated to helping you release great software, faster, so they made two great tools. Automate your UI performance testing with LoadNinja and ensure your API performance with LoadUI Pro. Try them both today.

About Roman Ferstl


Roman is a former software developer with a master's degree in astrophysics from the University of Vienna, and he has always been attracted to complex problems.

After gaining several years of experience in software engineering, he founded Triscon in 2018, a company fully dedicated to all topics regarding performance engineering. Triscon is based in Vienna and provides services across sectors for regional as well as international clients. In addition to performance testing, its services include application performance management (APM), test automation and software engineering, as well as workshops and trainings.

If you need help with performance engineering or just want to discuss some topics, feel free to reach out to Roman anytime. He and his team are always happy to chat about performance.

Learn more about the free PAC performance events

Since its beginning, the Neotys Performance Advisory Council has aimed to promote engagement among experts from around the world and to create relevant, value-added content on topics on the minds of today's performance testers. Some of the topics addressed during these councils and virtual summits are DevOps, Shift Left/Right, Test Automation, Blockchain, and Artificial Intelligence.

Connect with Roman Ferstl

  • Full Transcript
    Joe [00:01:58] Hey, Roman! Welcome to the Guild.

    Roman [00:02:03] Hi Joe. First of all, let me thank you for inviting me to this great talk. I would also like to start with an invitation, actually. If you, or anyone else who is listening, ever come to Austria and want to talk about performance, feel free to contact me. You are very much welcome to come by our office in Vienna for coffee and have a little chat about performance testing. We always love to talk about this topic. Just Google Triscon, spelled T-R-I-S-C-O-N, and you'll find us.

    Joe [00:02:34] Awesome. I actually saw a video; you're in a beautiful building in Vienna. Due to COVID-19, if I flew there today, would I still be able to have coffee with you?

    Roman [00:02:43] Not right now, since we are in the second lockdown phase, but apart from that, usually, yes, we're there. Right now it's home office. It's called Vienna's Millennium Tower. As you mentioned, it's a nice building. We have the twenty-third floor, so you get a nice view of Vienna there as well.

    Joe [00:03:03] Nice. So there are two pivots in your career that I'd like to dive into before we get into the meat of the topic.

    Roman [00:03:08] Sure.

    Joe [00:03:08] The first one is, I think you're like the fourth person I've interviewed that has an advanced degree in astrophysics. So…

    Roman [00:03:14] Really?

    Joe [00:03:15] Yeah. What's the deal with that? Like, are there any things you take away from astrophysics that make people want to be software engineers or vice versa?

    Roman [00:03:25] It's really astonishing that I'm already the fourth person with this kind of background, because there are not so many people out there. But anyway, there actually is a link: I dealt with the topic of performance engineering during my studies, as I was working with the science team on a space project. During my master's, I was part of a team that developed the onboard software for space telescopes, and I worked on the algorithms, so-called centroid algorithms, which are responsible for the telescope not losing a star once it has started observing it in space. So there is a link, but it's actually coincidental; it didn't lead me directly to the work that I'm doing right now. But there is a lot of software engineering going on in astrophysics as well, numerical simulations and such.

    Joe [00:04:23] Yeah, I don't know if this has anything to do with it, but I know there's work going on around the mining of asteroids, and I'm sure there are different types of software that need to be created in order to do that. That sounds like more fun to me than performance engineering. So what made you make the switch over from software engineering to performance engineering?

    Roman [00:04:42] Another good question. Actually, I was always present in both worlds, the scientific world as well as the business world, since I was working part-time as a software engineer during my studies to finance them. And I just enjoyed the business world more, in terms of the entire collaboration and communication aspects. It's something I decided based on all the exciting projects I had going on alongside my studies at the time.

    Joe [00:05:20] Nice. So, was there a particular project that got you involved with performance as a software engineer, where you said, oh, wait a minute, here's an area I never thought about that I want to learn more about?

    Roman [00:05:30] Actually, I was always attracted to new kinds of technologies and, to be really honest, not only the technologies but also the challenges that come with them. IT is a fast-moving world, and I enjoy that very much; there is a lot of change going on. Compared to that, the scientific world is usually a little more slow-paced, I'd say, since you work on one subject or topic for at least a year or two, and if you do a PhD, you focus on one thing for three or four years at a time. I found it more exciting to learn a lot of new stuff fast. And what I personally enjoyed a lot was working together with my colleagues during my time as a software engineer.

    Joe [00:06:23] So you started a company that seems to be mainly about performance engineering, so you must speak with a lot of customers and see a lot of people developing software in different situations. Do you find that they still don't know that they need performance engineering help, or do you think they see a need but just don't have the resources or the know-how to do it?

    Roman [00:06:45] I personally think there is a lot of need that currently goes unseen in terms of performance engineering. We do, of course, have customers who see the need, and we do a lot of work together with them. But I think there is still a huge number who are not aware of how important this topic is. And with the whole SRE, site reliability engineering, movement coming up, which in my opinion is a really hot topic right now, I think the entire performance engineering field will thrive and evolve in the near future.

    Joe [00:07:22] Absolutely. Like I said, I notice a lot of companies may not be aware of exactly what is needed, but they know there's some sort of need in the performance arena. You did mention SRE; it seems to have been a hot topic for the past three to five years or so. Do you see people getting confused about what SRE is compared to traditional performance testing, or do they basically see it as the merging of two different disciplines into one?

    Roman [00:07:50] I wouldn't say so. I think they both fit together quite well. Traditional performance testing has to change a little bit to fit the new SRE approaches and the fast-paced DevOps world, I would say, but in principle it's a synergy. I mean, it's all about site reliability: if you want to make sure that you can rely on your IT services, obviously you have to do performance tests, as there are bugs, problems, and stability issues that you can only uncover in performance testing. So yes, I think it's really important to combine these things, and performance testing should actually be a mandatory part of SRE.

    Joe [00:08:45] Definitely. You know, I notice also with performance testing, a lot of times people still think of it as an activity that's done after the software has been developed, where you just throw a huge load on the system to make sure it's able to handle it. But your recent session with Neotys on actually integrating performance scripts into automated quality gates is, to me, still an advanced topic. So before we dive into that, what would you say are some of the benefits of automating your performance tests in a way that lets you integrate them into your performance engineering CI/CD pipelines?

    Roman [00:09:21] Yes, that's actually what I meant by saying that the performance engineering and performance testing field has to adapt to the current times. By that I meant exactly this: there is a need for a lot more automation in the entire process, to avoid running performance tests only after everything is finished, where just before go-live you do a big load test and it's probably too late to react to the results appropriately. So yes, there is a clear need to integrate this process into early development.

    Joe [00:10:06] So, I mean, I think a lot of times people think they have to wait until they even get to CI/CD, and sometimes people are pushing it even further left than that. Are there any tools or techniques developers can use to be more aware of performance, especially when they're building these single-page-type applications, where it really is front-end performance that is causing the issue, not necessarily a large load on the back end?

    Roman [00:10:30] So if you're a front-end developer, there are a couple of tools out there, for instance Sonar or something like that; just Google around, there is quite a big toolset out there to evaluate your page load times for front-end tests. But what's really important is to cover the entire thing if you want to: also back-end services, API testing, etc. There is really a bunch of tools, and it really doesn't matter which one you pick; just get started with one tool, no matter if it's front end or API. Use SoapUI. Use JMeter. Use NeoLoad. Use anything that looks good to you. But I think the important thing is that you are aware that you should do it, that you should care about performance. And I actually feel for developers when they are not doing it, since they are probably in a hurry, sprinting from one release to the other, and performance testing, with all the concepts that come with it, might be totally new to them: there is no single kind of performance test; you have different test types and different test goals, and you need to consider how to set up a scenario and things like that. So a lot of time is actually needed to understand what to do and to do it right, and I think that is probably one obstacle. That's why performance testing isn't as widespread as it should be.
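To make "just get started with one tool" concrete: a minimal headless JMeter run, the kind that can later be dropped into a pipeline, might look like the sketch below. The test-plan file name is a placeholder, not something from the episode.

```bash
# Minimal non-GUI JMeter run; registration.jmx is a placeholder test plan.
#   -n  non-GUI mode      -t  test plan to execute
#   -l  raw results log   -e -o  generate the HTML report into report/
jmeter -n -t registration.jmx -l results.jtl -e -o report/
```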

    Joe [00:12:08] Well, I agree with you 100 percent, especially when you start talking about integrating things into the pipeline where developers are checking in code, and then you tell them, oh, by the way, we're going to add some performance scripts that are going to run. They probably freak out, because those will usually slow down how long it takes for that build to be integrated. So before we get into that: when you talk about performance testing in a quality pipeline, or a quality gate, what exactly does that mean?

    Roman [00:12:33] What does an automated quality gate mean? To sum it up in one sentence: it means having direct feedback on your test, in a pipeline or not necessarily in a pipeline, to automatically evaluate a performance test and decide whether the application is good enough to proceed to the next stage. The gate approach simply means that when an application is transitioning from, say, the development stage to the next level, the testing stage, there can be a quality gate in this transition that prevents the application from progressing from one stage to the other, because there may be performance issues that should be fixed beforehand.

And what's necessary here is the automation of a lot of things. This entire approach consists of a lot of automation processes: first you have to automate the test design, then the test execution, and then the test analysis. If you think about it, for a performance test you have to write your scripts and you have to maintain your scripts. Last year at the Neotys PAC, I talked about how to automate the test design, the entire correlation work that you have to do for performance tests, which makes them really intense and time-consuming in terms of maintenance. You can automate this, for instance, by building on your automated functional tests, such as the tests you have in Selenium or Tosca or anywhere. There are nice integrations; we often use NeoLoad since it offers such integrations. Obviously, this part is not so easy. Automating the execution is easy: you just have to trigger your test from somewhere; it can be from a pipeline, a web interface, anywhere. Then there is automating the test analysis, to really evaluate: is it good enough or not? That's a really complex step, and it is, so to say, the final step. Altogether, you end up with a score, because that is the goal: you want to make a decision based on all the metrics that are relevant for a load test. Those feed into a final score, and from that you can automatically determine whether it's good enough to proceed to the next stage.

But why is it so complex? It's simply very different from traditional functional tests or unit tests. If you do test automation at the browser level for functional tests, you have a simple pass-or-fail scenario: it works or it doesn't. But in a performance test, there is a lot going on. You have multiple tests executing at the same time from possibly hundreds of virtual users, so it's actually not one single test run, it's hundreds of thousands. You get a lot of data, and to evaluate it you have to apply statistics. That's one challenge. The other is that there is a whole bunch of metrics that are interesting. Why is that? Because in performance testing, you care about stability; you care about concurrency and error rates; you simply don't want to break your site. Next, of course, you care about the performance, the speed. And the third thing is resource utilization: you want to know how many resources you are using from, say, your Red Hat OpenShift cluster, something that is also related to cost. All of this should be covered in performance tests.

For all these things, different kinds of metrics are available, and you need a way to automatically evaluate them and combine them into a final decision. This is something that performance engineers have been doing manually for years, and there's a good reason for that: if you start to automate it, risk comes with it if you don't consider all the different factors and statistics that might be necessary to make a good decision.
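To picture where such a gate sits, here is a hypothetical CI configuration sketch; the stage and script names are invented for illustration, and the episode doesn't prescribe a particular CI tool. The gate stage runs after the load test and blocks promotion when the evaluation fails.

```yaml
# Hypothetical GitLab-CI-style pipeline: the quality gate sits between
# the performance test and the promotion to the next stage.
stages: [build, performance-test, quality-gate, promote]

performance-test:
  stage: performance-test
  script:
    # Run the scripted load test headless (placeholder script).
    - ./run-load-test.sh --scenario baseline

quality-gate:
  stage: quality-gate
  script:
    # Ask the evaluation component (e.g. Keptn) for a verdict on the test
    # window; a non-zero exit code fails the build and blocks promotion.
    - ./evaluate-gate.sh --timeframe 30m

promote:
  stage: promote
  script:
    - ./promote-to-next-stage.sh
```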

    Joe [00:17:18] Then once again, I totally agree. I think another challenge is getting the environment, so that when you do scale up your stress test or load test, you have the resources to do that. So I guess, what are the tools and pieces you use to create an automated quality gate in performance testing? I think one of them was Keptn, but before we get into that, are there certain tools you use across the different stages, and what does it look like?

    Roman [00:17:44] Yes, so basically you need three components. One is a tool for executing the load; in most cases we use NeoLoad, and we also showed this at the Neotys PAC event. Alongside NeoLoad you need an APM tool for your metrics; we are using Dynatrace for this. And you need a third component that is responsible for evaluating all the metrics I mentioned before; we use Keptn. So it's a combination of, for instance, NeoLoad, Dynatrace, and Keptn, but it can be a combination of any tool that generates load, an APM tool, and a tool for automatic evaluation of your metrics.

What we did is we had a demo application, called the Trace Contesting Turf. We actually use it for recruiting challenges: there are bugs in there that can be triggered if you do a simple load test, and if you're able to do that, you pass to the next stage in our recruiting. So we did this at the PAC event: we ran a little test with NeoLoad against the Trace Contesting Turf. It's a simple application where you can do registrations, just a registration form that you fill out and submit, and you end up seeing a lot of users in a list. So we generate this load. The entire application is constantly being monitored by Dynatrace, meaning that on all service levels, front end as well as back end as well as database, we see how many resources are consumed, we see the individual performance in terms of response times, and we see the error rates on each individual component. And if you have a highly reproducible test, simply by making a smart test design, you can run the same test over and over again to do some baselining, so you will be aware of any little metric that changes from one release to another.

The third component, Keptn, just has to gather metrics, and it can gather them from any APM tool. Keptn is an open-source project, you can check it out online, and it gathers metrics from so-called SLI providers. SLI is terminology that comes from Google's SRE; we're going to talk about SLIs and SLOs in a minute. So to sum it up, what we showed is that you can stress an application, for instance with NeoLoad, have your application monitored by an APM tool like Dynatrace, which is happening anyway if you have one, and then, after the test, you send an event to Keptn telling it when the load test was performed, and it will automatically evaluate a score based on these metrics.

So where do these metrics come from? I already mentioned the whole SRE topic a couple of times. Basically, if you're a performance engineer, this is not new to you: you have evaluated your load tests before, maybe using performance counters that you gather manually, or you already have an APM tool and have built your own load-test dashboards. Since there are so many metrics you may want to consider, it's really hard to manually tell any piece of software which metrics you want to evaluate and what the goals are. So we thought there must be an easier approach, otherwise no one is going to do it. We had a workshop together with Andreas Grabner, who is one of the main contributors to Keptn, and the idea was that you simply gather these metrics automatically from your load-test dashboard, because chances are high that you will build one anyway to analyze your test, and maybe it already exists, so you can simply reuse it.

In this dashboard, where your metrics are plotted in your nice graphs, you simply add a tiny additional piece of information about the goal for each metric, in a way that Keptn understands. After the test, Keptn pulls out these metrics and combines them into a final score. There is actually a lot going on there. I mentioned pipelines, and I just want to sum this up: you don't necessarily need a pipeline. It doesn't matter where you trigger your test from, because if you're a performance engineer, maybe you don't care about pipelines yet. Simply start your load test as you always do, have your APM tool monitor your application as is already being done anyway, and then you can add the third component, completely independently of that, which automatically evaluates a score; you simply send this single event telling Keptn when the test ran, from start to end, and where the SLI/SLO dashboard is.
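For reference, that single "evaluate this window" trigger can be sent from the Keptn CLI. The sketch below is an assumption about usage, with placeholder project, stage, and service names; exact flags vary across Keptn versions.

```bash
# Ask Keptn to evaluate the SLOs over the load-test time window.
# Project/stage/service names are placeholders, not from the episode.
keptn trigger evaluation \
  --project=demo-app \
  --stage=performance \
  --service=registration \
  --start=2020-11-10T14:00:00 \
  --timeframe=30m
```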

    Joe [00:23:12] Very cool. So when you talk about scores, is it one main overall score, like 80? And then when you dive into it, does it break down where you scored badly or well, and does it give you suggestions on how to actually fix it? Can you talk a little bit more about what you get with that scoring?

    Roman [00:23:28] Yes, absolutely. I will stick with Keptn now, since that's what we chose here. What you see after the test has been evaluated, in the so-called Keptn Bridge, are all the individual metrics that you defined as important, necessary, and interesting for a load test, and the values they have. This is something you have to define beforehand: whether you want, for instance, the average, the median, or the 90th percentile of the response time, the error rate, etc. That is the so-called SLI, the service level indicator: what to measure and how to measure it. In addition, you define a goal. To stick with an example: the response time should be less than 300 milliseconds in 90 percent of all cases. That would be a proper SLO. Keptn will then evaluate and figure out whether this individual goal, which is just one part of the overall score, is actually met or not. You also have the option to put in some weighting, if you want some SLIs to be more important than others. If you have hundreds of SLIs with no weighting, so all are equal, then each individual SLI simply contributes a score of one. And no matter how many SLIs you define, you always end up with a score from zero to 100. To determine whether the test fails or not, you can also define individually that the test should be green if the score is 80 or above, or whatever you need it to be.
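As an illustration of what such definitions look like in Keptn, here is a minimal sketch of an SLO file matching Roman's example. The SLI names, the exact schema fields, and the thresholds below are assumptions; the precise format depends on the Keptn version.

```yaml
# slo.yaml -- sketch of a Keptn SLO file (schema fields are illustrative).
spec_version: "1.0"
objectives:
  - sli: response_time_p90      # which indicator to evaluate
    pass:
      - criteria:
          - "<300"              # p90 response time under 300 ms
    weight: 2                   # weight this SLI more heavily than others
  - sli: error_rate
    pass:
      - criteria:
          - "<1"                # keep the error rate under 1 percent
    weight: 1
total_score:
  pass: "80%"                   # gate turns green at a score of 80 or above
  warning: "60%"
```

The matching SLI file would map each indicator name to a query against the SLI provider; the Dynatrace-style metric selectors here are likewise illustrative, not taken from the episode.

```yaml
# sli.yaml -- maps SLI names to metric queries understood by the provider.
spec_version: "1.0"
indicators:
  response_time_p90: "builtin:service.response.time:merge(0):percentile(90)"
  error_rate: "builtin:service.errors.total.rate:merge(0):avg"
```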

    Joe [00:25:35] So when I spoke to Andy about Keptn, it was fairly new, when it first came out. Are you actually using it in production, or is this a proof of concept? And can you attest to how well Keptn has been working for you in the real world?

    Roman [00:25:50] Yes, it's constantly evolving, and we are actually already using it for one of our customers to evaluate our performance tests in test stages. Keptn has a lot of other use cases; what I'm talking about is the so-called quality gate use case. If you go to the Keptn home page, you will see topics like auto-remediation and autonomous cloud, which are all about deciding whether a new build in production is performing better or worse than the previous one, blue-green deployments, etc. We're not doing that yet; we are focusing only on the automated quality gate approach, to decide whether our test was good or not, and we do this in test stages. As I mentioned before, what was really necessary to make this usable in the real world was having a nice way of defining these SLIs and SLOs, with the dashboard approach we came up with in the workshop, because in previous versions you had to define all of this manually in YAML files. That meant a lot of effort to set up initially and to maintain. For instance, for a small load test with a handful of charts, say five to ten for different web and server metrics, I would already end up with a YAML file behind it of more than 500 lines, which I don't want to have to care about anymore. You don't want to write that manually. Since the dashboard approach is available now, it's really feasible, and not a lot of time is necessary to evaluate your load test this way. So yes, it works in the real world for us.
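The episode doesn't spell out the dashboard convention, but the idea is that each chart on the load-test dashboard carries its own goal as a small annotation that Keptn can parse instead of a hand-written YAML file. A hypothetical tile title might look like this; the marker syntax is an assumption for illustration only.

```text
Response time p90 ; sli=response_time_p90 ; pass=<300 ; warning=<400 ; weight=2
```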

    Joe [00:27:41] Very nice. Speaking of the real world, it does seem like it's still kind of new to get teams on board with the automated quality gate approach to performance. Any lessons learned, things people have been struggling with, where you'd say, if they just knew about this, they would make an easy transition to this type of approach?

    Roman [00:28:00] I think the entire community around Keptn is working hard to make the installation really easy. I think with only one or two commands you can actually set up Keptn on a new server. Keptn runs on Kubernetes, so you would think you have to set up Kubernetes from scratch and then deploy Keptn there, etc., but no; it's a lot easier with the installation the Keptn community provides, and all of this gets spun up automatically. For us, the main challenge was actually in the corporate environment, where you may not have access to everything from everywhere and have to care about firewalls, since there is API communication that's necessary: Keptn needs to reach your APM tool, and if you want it triggered from a pipeline, you also need communication between Keptn and the pipeline software. And of course you need a server somewhere to set it up if you want it on-premise. But that is something you have with any kind of software, so I would say it's pretty straightforward and easy to give it a try.
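For orientation, around the time of this episode the "one or two commands" Roman mentions looked roughly like the sketch below; the bootstrap URL and the installer's behavior vary by release, so treat this as an assumption rather than current instructions.

```bash
# Fetch the Keptn CLI via its bootstrap script (URL/version may differ today).
curl -sL https://get.keptn.sh | bash

# Install Keptn into the Kubernetes cluster your current kubectl context
# points at; the installer spins up the required components automatically.
keptn install
```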

    Joe [00:29:24] So we're only a few months out from 2021. Any thoughts on the future of performance engineering, or any trends you definitely see happening that folks need to be aware of?

    Roman [00:29:35] That's an interesting question. I see some trends and I have a lot of wishes. I think one trend is getting more feedback from production: integrating SLIs and SLOs and also using them in performance tests. That's a clear trend I see, a kind of fusion approach with site reliability engineering. Another clear trend is involving performance tests in development as early as possible. But it's a trend, and most people are still doing it the other way, as you mentioned earlier: you develop your software, and then later, at a certain degree of completion, maybe even just before it hits production, you do a load test. I still see this quite a lot, and I think it's going to change over time. And from the technological perspective, I think what's really going to be exciting is browser-based load testing. There are already solutions out there that can do browser-based load testing, as opposed to protocol-based tests, which obviously come with more maintenance, etc. But there is no solution out there yet that works in all environments, and by all I mean cloud-based as well as on-premise, which is really important too. So I'm really excited about that. I've also seen a lot of people working on getting more feedback from production in terms of usage. The thing is that you want to reproduce the real world in your load tests, right? So from your APM tools, and also from your logs, etc., you want to recreate the scenarios that are actually happening in production in your load tests. And there are people out there, for instance at Neotys, who are trying to make that happen. I'm really excited to see that.

    Joe [00:31:51] Okay Roman, before we go, is there one piece of actionable advice you can give to someone to help them with their performance testing efforts? And what's the best way to find and contact you or learn more about your company and the services you offer?

    Roman [00:32:05] All right. My advice: if you are already a performance engineer, I think your role will become more and more important, so you can be excited about that. I personally am very excited to work in such a great and interesting field. And maybe as advice, a fun thought: as a performance engineer, or if you're thinking about becoming one, sometimes you are seen as a hero, when you save an application from the trouble of going live with a bad version. But sometimes you may also be seen as a villain, when you stop an important application from going live because of significant performance issues that came up. In the end, though, I've found that people will always be grateful, even if they aren't at first. So even if you look like a villain at first, you will eventually be seen as a hero, just like Batman. For anyone who enjoys this job as much as I do: keep in mind that what you're doing is really important. It's a really great field, and it's going to get a lot of attention in the coming years, I promise. The best way to contact me and my colleagues, whom I would like to thank at this point as well, thank you to Fabián, Alpha, and Christian, such great colleagues and excellent performance engineers, is to reach out to me on LinkedIn. My name is Roman Ferstl, spelled R-O-M-A-N F-E-R-S-T-L, or you can simply visit our web page at triscon-it.com. We offer services all about performance: APM, performance testing, workshops, trainings, etc. If you simply want to have a chat, just make sure to contact us.

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
