Speeding Up Automation Analysis with Ruslan Akhmetzianov and Natalia Polyakova

By Test Guild

About This Episode:

How do you handle debugging a bunch of automated test failures? Nothing is more frustrating than trying to figure out why some tests failed. In this episode, Ruslan Akhmetzianov, DevRel Lead, and Natalia Polyakova, a quality engineer at Qameta Software, share ways you can speed up your automation analysis. Discover how to handle flaky tests, which tools and approaches help speed up automation, and much more.

Check out more ways to help your automation testing management efforts with Allure TestOps: shorturl.at/cdhLP

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Ruslan Akhmetzianov


Ruslan Akhmetzianov is a DevRel Lead at Qameta Software, a TestOps ambassador, DevOps enjoyer, and conference crawler.

Connect with Ruslan Akhmetzianov

About Natalia Polyakova


Natalia has been working in testing for almost four years. She started as a trainee and finished as a lead before leaving the manual QA path for automation. She is now a quality engineer at Qameta Software and works as a mentor for junior QC engineers at a tech university. She believes testing should be a part of the whole team's work, not just the testers'.

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:20] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today, we'll be talking with Ruslan and Natalia all about speeding up automation analysis. As an automation engineer, I know how hard it is to analyze all your results, especially if you have a lot of failures, so you've got to learn how to save time analyzing automation test results, which is what this episode is all about. So you want to stay around all the way to the end. If you don't know, Ruslan is a DevRel lead at Qameta Software. He's a TestOps ambassador, a DevOps enjoyer, and a conference crawler, and he's been a guest on the show before, so he brings a lot of knowledge. We're excited about this. We also have Natalia joining us, who's been working in the testing industry for almost four years. She started as a trainee and finished as a lead before leaving the QA path for automation, so she's the perfect guest for the show. She's now a quality engineer at Qameta Software and works as a mentor for junior QC engineers at a tech university, which is really cool. And she believes testing should be part of the whole team's work, not just the testers'. Wholly agree, definitely agree with that. So you want to stick around all the way to the end to listen to all the knowledge they're about to drop here. So let's get into it.

[00:01:32] The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Their cloud-based test platform helps ensure you can develop with confidence at every step, from code to deployment, for every framework, browser, OS, mobile device, and API. Get a free trial. Visit testguildcom.kinsta.cloud/saucelabs and click on the exclusive sponsor's section to try it for free today. Check it out.

[00:02:00] Joe Colantonio Hey. Welcome to the Guild.

[00:02:06] Ruslan Akhmetzianov Hey.

[00:02:07] Natalia Poliakova Hi.

[00:02:08] Joe Colantonio Good to have you. I guess before we get into it, is there anything I missed in either of your bios that you want the Guild to know more about? Sometimes I botch them.

[00:02:15] Ruslan Akhmetzianov No, I guess you did well. That's awesome.

[00:02:18] Joe Colantonio Cool. All right, great. I guess before we get into the topic then: a lot of times automation engineers struggle with a lot of things, but one of the biggest is test results. Just curious to get your input on that statement. Do you see that as well, and what kinds of test result issues do you think a lot of automation engineers are struggling with? Either one of you can start.

[00:02:39] Ruslan Akhmetzianov Yeah, I believe that Natalia should start because she's actually a quality engineer, so she knows about the test results and so on. And I just love to talk. Natalia, please.

[00:02:54] Natalia Poliakova Thank you, Ruslan. Hello. As a manual tester, when you test a product and find a bug, you can just create a bug report so the whole development team can understand what went wrong and how exactly it must work. Automated tests are not as productive as manual testers in this respect and frequently are not able to give the development team clear information about the bug. Developers or testers have to read the test results to find out whether an automated test failed because of a real bug or because of something else. It can be a bug in the test code, it can be a temporary server error, and so on.

[00:03:37] Ruslan Akhmetzianov So long story short, tests are actually dumb, but people usually are not. And at this point we get the whole story about a lot of new processes and entities popping up as soon as we keep automating stuff and trying to keep this automation under control. Natalia went the whole path from manual testing to DevOps to TestOps, which we discussed last time with you, Joe. I believe she can tell us about these entities and processes which pop up to actually enable reading the results. Because when you just use a testing framework in an IDE or on a pipeline, all you get is a huge stack trace, and nobody wants to deal with that. Natalia, tell us, what do people actually use now, and what's necessary to make automation work?

[00:04:34] Natalia Poliakova Yeah. Automated test results include failed scenario steps, stack traces and error messages, logs, and attachments, a lot of everything. But all of this stuff can be handled only by testers and developers. Doing it manually is time-consuming, and in automated testing the number of results keeps increasing, so analyzing everything takes a lot of time as well. What can we do? Fortunately, we can automate part of the repeating work. Of course, it's about regression tests and sanity suites. For example, we can handle it like this, first of all.

[00:05:19] Ruslan Akhmetzianov So I just wanted to add that regression tests are usually the first step, because nobody wants to run the same tests by hand each time a feature goes out or a release is coming closer. Nobody loves routine, except automated tests; they do love that. But at this point you get a ton of automated tests which should cover hundreds of the usual manual scenarios, because you want your automated tests to be small, atomic, clear, easy to run, read, and fix. And when a lot of these tests pop up and you start running them again and again... Just a couple of weeks ago, I was talking to QA engineers from the Miro team, the collaboration tool; I guess most people know about it. They run around 3,000 tests 400 times a day. There are 600 engineers working on the product and they create somewhere around 200 pull requests, and they run the tests before a pull request and after. With this amount of test runs, you definitely get something wrong. Some tests are, how do you say, false negative, or they're just unstable. And speaking of unstable tests, that's a huge topic.

[00:06:50] Joe Colantonio How do you keep from losing the developers? How do you keep the developers engaged? Because a lot of times it's like: these tests fail, I have no confidence in them, I'm not looking at these reports' results, I can't trust them.

[00:06:59] Ruslan Akhmetzianov Yeah. So there is a point where my colleague Artem, who is a co-founder of Qameta, and I started speaking not just about trust, but about faith in automation. You should have faith in automation, because trust is something you need proof for, but to make your automation evolve, you need faith to believe that it will work if you put a lot of effort into it. That's how it works. And then you start getting flaky tests, and we all know what a flaky test is: a test which can fail or not fail depending on something that happens under the hood. Is the test broken, or is the code broken? Or maybe it's the infrastructure, or maybe it's astrology or bad luck. Nobody knows what happens when a test becomes flaky. So that's something to discuss, because the reasons for such behavior are often not obvious. I believe a lot of testers read, for example, Uber's engineering blog. There is a whole team there working on flaky tests, and they've built huge frameworks because they've got dozens of thousands of tests and something fails on every run. At that point, you don't understand: is it a bug, or is it just a bad time to run the tests? Or maybe the machine in the cloud just works inefficiently. So at this point, we have to think about how to save time working with flaky tests, because investigating them takes a lot of time. I believe Natalia knows a couple of ways to work with flaky tests. Natalia, please.

[00:08:53] Natalia Poliakova Well, frankly speaking, some people say that if you have a flaky test, you should fix it. There are lots of ways to fix flaky tests, but sometimes that's not a good way to save time. When we want to save time, we can do at least two things. First of all, we are able to rerun all failed tests. This capability is already available in the most popular frameworks, JUnit, Pytest, TestNG, etc., which can run the same test several times in the hope that in one of the runs it will pass. So rerunning can help us understand that an occasional failure is not a real bug. And the second thing that we can do is muting, or quarantining, tests. It can also be called test ignoring, and it helps us not pay attention to unstable tests in our reports. Muting or quarantining is usually one of the test framework's tools too, so we can find it in JUnit and in TestNG and so on. We just need some annotations or methods to make certain tests invisible in a launch. And when we fix the test, we just remove the annotation and the test returns to real working life.
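To make those two approaches concrete, here is a minimal sketch in pytest, one of the frameworks Natalia mentions. It assumes the pytest-rerunfailures plugin is installed; the test bodies and the QA-123 ticket reference are purely illustrative.

```python
# pip install pytest pytest-rerunfailures
import random

import pytest


@pytest.mark.flaky(reruns=2)  # rerun up to 2 times before reporting a failure
def test_checkout_total():
    # Hypothetical flaky behaviour standing in for a timing/infrastructure issue.
    assert random.random() > 0.1


@pytest.mark.skip(reason="Quarantined: unstable login flow, see QA-123 (hypothetical ticket)")
def test_login_redirect():
    # Muted/quarantined test: it stays in the suite and in the report, but is not run.
    assert True
```

When the flaky behavior is fixed, removing the marker is all it takes to bring the test back, which is exactly the "remove the annotation" step Natalia describes.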

[00:10:21] Ruslan Akhmetzianov Like a charm. Yeah.

[00:10:23] Joe Colantonio Did you ever just delete a test before it runs? I'm just curious. I know you gave the option of rerunning; a lot of people freak out when I tell them to rerun a test, like, oh, you're skipping over a real issue. I like quarantining; I found that really effective. But how about deleting? Do you find teams maybe create a test for everything when they don't necessarily even need the test, and deleting it would save some time?

[00:10:44] Natalia Poliakova Well, that's a tricky question, I believe. Deleting a test is a good way when the test does not work for you; that's the only case when you should delete it, as far as I believe. And as far as I understand, if a test fails and then you delete it, that's just truly ignoring the test. It's not a quarantine. That's like the way I tend to work with my issues in real life: I just ignore it, like, oh, there is a problem, but I can still get some sleep, maybe it will just go away. But it won't. Another thing that's important to point out is that a failing test is actually a test we should love, because it actually does its job. Because if we've got a green pipeline and a lot of bugs, there is something wrong. Analyzing results is important, and especially focusing on real issues opened by failures is good. Deleting a test is good when, for example, you actually don't need to test something anymore, say a feature has been deleted; then of course we delete the tests. But deleting a test because it fails is just the same as laying off people who are actually working.

[00:12:06] Natalia Poliakova And we don't want to forget about test coverage. We shouldn't forget about that.

[00:12:13] Ruslan Akhmetzianov Yeah. Test coverage should just increase all the time, or at least not go down. Right?

[00:12:18] Joe Colantonio Right. Right.

[00:12:19] Ruslan Akhmetzianov Yes. So at this point, I just wanted to say that these two approaches brought up by Natalia are really basic. There are a lot of complicated and complex solutions for working with flaky tests, but if you are trying to save time and work efficiently and you are starting to have all these issues, don't go for complexity. I don't know if you know Venkat Subramaniam, a well-known speaker and consultant from the United States. He talks a lot about Java, and he has a talk called Don't Walk Away from Complexity, Run, because complexity is something you don't want present in your software pipeline, or in your code, or anywhere; the simpler the solution, the better. And this is why we worked on implementing these two simple approaches in Allure TestOps, because as soon as you try to build a complex solution into a prebuilt system, it won't fit a lot of teams. Each team has specific needs and so on; that's more about in-house development of tools. But if you build a platform, you try to build something basic which will work for almost everyone. So this is why Allure TestOps supports the rerun functionality of the frameworks, for example. But as soon as you start rerunning tests from JUnit or TestNG or Pytest, you just get the last result, and you have to watch it: okay, it passed, should I rerun it because it might fail next time? So you rerun it, and okay, it still passed. Should I rerun it again? And then when it fails, what is the percentage of failures? Is it a flaky test? In some teams, people say a flaky test is a test that fails more than 10% of the time, and if it's under 10% of failures, it's more green than red, so we say it's an okay test, but something goes wrong sometimes. In other teams, just a single failure in the whole history of test runs makes the test flaky. That's not a topic to settle here; it's really up to the team to decide what a flaky test is. And this is another issue for tool builders like us, because we can't make a single flaky-test function, or a fix-the-flaky-test button, because a flaky test is a separate thing for each team. What we do is provide capabilities for people to see the whole history of reruns. If you have got a test case in a launch which failed and then was rerun automatically, for example by the framework, you get all these lines of passed, passed, passed, failed, passed, failed, failed, and you can look at the launches and the times of the launches and everything. This is how we support rerunning tests, because if you've got ten thousand tests, of course, you don't want to run ten thousand tests ten times. Rerunning just the failed ones is a good function. That's awesome. And the same works for muting: a muted test should be muted for some time, and you should know the time scope for a test to stay muted, because if you just mute it and forget about it, something goes wrong, as far as I believe.
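The 10% threshold Ruslan mentions is just one team's convention, but the calculation behind it is simple. Here is a rough sketch, assuming a test's rerun history is available as a plain list of statuses (the sample data and threshold are illustrative):

```python
# Compute a test's failure rate from its run history and compare it to a
# team-chosen flakiness threshold (10% is the example figure from the episode).
def flakiness(history: list[str]) -> float:
    """history is a list of statuses like ["passed", "failed", "passed", ...]."""
    if not history:
        return 0.0
    failures = sum(1 for status in history if status == "failed")
    return failures / len(history)


runs = ["passed", "passed", "failed", "passed", "passed",
        "failed", "passed", "passed", "passed", "passed"]
rate = flakiness(runs)
print(f"failure rate: {rate:.0%}, flaky by the 10% rule: {rate > 0.10}")
```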

[00:15:55] Joe Colantonio Yeah. I was going to ask, Natalia, if you did mute something or quarantine it, how do you remember to unquarantine it? Back in the day when I was doing automation, you had to actually remember, I need to get rid of that tag. Does using something like Allure TestOps make it easier for you to quarantine and unquarantine things?

[00:16:10] Natalia Poliakova Yeah, of course. It's just one button. You can just click on it.

[00:16:15] Joe Colantonio Oh, cool.

[00:16:16] Natalia Poliakova That's all.

[00:16:17] Ruslan Akhmetzianov Yeah. And so that you don't forget about the muted tests, you actually get the number of muted tests in each launch in the results. So they are muted, but they are still there. You get, say, 5,000 automated tests and a hundred manual tests; 90% are green, 8% or 10% are broken or failed or something, and then there are these 2% of muted tests, which are just the gray pixels on your monitor, but you remember that they're muted tests. You take a look at them and then, oh yeah, we should fix that.

[00:16:53] Joe Colantonio Nice. Can you do custom tags as well? Say you have eight sprint teams, and you know Sprint Team Z has all its tests quarantined, so they don't have any coverage being checked. Does it give you any type of insight like that?

[00:17:06] Ruslan Akhmetzianov Well, if I got the question right, we do support custom fields. Each test case may have a number of standard fields: owner, feature, microservice, or something like that. I don't remember exactly; maybe Natalia remembers the structure of a test case. There is also the option to create custom fields, which you just name and use as you wish: you can create rules for the custom fields, or just ignore them, or use your own strings as values there.

[00:17:41] Natalia Poliakova And you can integrate them into your automated tests.

[00:17:47] Ruslan Akhmetzianov Yeah. As an annotation in your code. So you just work with it within your code and then you get it in the UI for managers or testers. And then you can create any dashboards or results or data sliced by these custom fields. If you want to get all the muted tests in the runs or in the whole test suite or in this specific test plan, it's actually possible.
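As a concrete example of what "an annotation in your code" can look like, here is a minimal sketch using the allure-pytest adapter; the field names and values (owner, microservice, and so on) are illustrative, not a fixed schema.

```python
# pip install allure-pytest
import allure


@allure.feature("Checkout")
@allure.severity(allure.severity_level.CRITICAL)
@allure.label("owner", "natalia")          # hypothetical custom field: who owns the test
@allure.label("microservice", "payments")  # hypothetical custom field: which service it covers
def test_payment_is_captured():
    # The labels travel with the result, so dashboards can be sliced by owner,
    # microservice, severity, and so on.
    assert 2 + 2 == 4
```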

[00:18:12] Joe Colantonio Nice. So, we've mentioned Allure TestOps a few times, and for the people that didn't hear that previous episode and are like, what is Allure TestOps — what is it in a nutshell, to get people up to speed if they missed the original?

[00:18:26] Ruslan Akhmetzianov Yeah. Okay. Allure TestOps is what I call a DevOps-ready testing platform. It's a tool that was built three or four years ago, when test management systems were on the rise. They just popped up everywhere and everyone started using TestRail, which is an amazing test management system. But test management systems are usually focused on manual tests: they provide you with an amazing experience for testing scenarios and working with test cases, putting them in the right order, clicking everywhere, getting the results, and then you get a beautiful dashboard for all that. But as soon as you want to get an automated test integrated with a test management system, you have to write an integration. It's simple, right? They provide you an API. But our founders thought about how much time people spend writing integrations. So why not create a system which is already integrated out of the box with a huge number of programming languages, CI/CD systems, issue trackers, and now also test management systems? Allure TestOps has an interface and a whole block for working with manual testing, so it is a test management system too. But when people want to work with automation... oh, that's already not in a nutshell. Sorry, Joe.

[00:20:01] Joe Colantonio I'll have a link to the episode so people can actually listen to the full glory of Allure TestOps.

[00:20:06] Ruslan Akhmetzianov Yeah, but long story short, you just get Allure TestOps and plug it into your testing infrastructure, because usually you've got, say, three frameworks — one for web, one for backend, one for integration testing — and two CIs, where one is the master or main CI and the second is a testing CI, or something like that. You just plug it in. You don't develop anything, you don't maintain anything. That's why Allure TestOps was created.

[00:20:38] Joe Colantonio And Natalia, I assume you use Allure TestOps in your day-to-day work. Does it help you? Had you used anything before that, and now that you're using it, maybe you see, oh, this really helps, now developers are really on board, or anything like that? Any other benefits that you've seen?

[00:20:52] Natalia Poliakova Yeah, of course, it helps. When I worked at my previous company, we integrated Allure TestOps for the first time. That was the first time I learned about it, and it was a really cool experience, because we hadn't had normal processes for automated tests. After the Allure TestOps integration, we understood that there are really cool tools to analyze test results and to rerun tests from one place, across a lot of different repositories and CI/CD systems. It made things very easy to understand for our manual testers, because I was a manual tester and my colleagues, my team, were manual testers. It really helps. And now, of course, we use Allure TestOps too; I test it and help with it.

[00:22:07] Joe Colantonio Right, right. That's cool. So it almost sounds like a collaboration platform as well, because it's all in one location and the whole team goes to that one location — like you said, you're not pulling from ten different things and trying to figure out, oh, why did this happen? So I like that. So I guess, what are some other approaches to help speed up automation or the analysis of failures? You mentioned that a lot of these tools weren't necessarily made for automation, yet they have defects and severity. Can those things also be applied to automation? I guess, what kinds of approaches can help speed up automation is my question.

[00:22:39] Natalia Poliakova Of course, flaky test scenarios are not the only thing that can be handled automatically; there are other things too. It's possible to automate some of these processes with tools like Allure TestOps, of course, and libraries. They also provide tools for defect detection rules, tools for viewing different kinds of attachments and test result history, and even tools based on AI. And perhaps the most useful one, as I can say, is automated defect detection. Sometimes a stack trace or error message does not contain clear information about what happened, but some error messages or stack trace texts can be replaced with readable definitions. For example, if you have an error that sounds like "a selector with some long scary expression couldn't be found", what does it mean for the development team, or actually for our team? But if you replace this error with "wrong selector" or "element is not visible", the whole team can understand what happened.
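A rough sketch of that rule-based idea — mapping raw, noisy error text to a short readable category — might look like this; the patterns and wording are illustrative assumptions, not Allure TestOps' actual matching rules.

```python
# Map raw error text to a short, readable category the whole team understands.
import re

RULES = [
    (re.compile(r"NoSuchElementException|selector .* (couldn't|could not) be found", re.I),
     "Wrong selector / element not found"),
    (re.compile(r"ElementNotVisible|is not visible", re.I),
     "Element is not visible"),
    (re.compile(r"TimeoutException|timed out", re.I),
     "Page or element timed out"),
]


def humanize(error_message: str) -> str:
    # Return the first readable definition whose pattern matches the raw text.
    for pattern, readable in RULES:
        if pattern.search(error_message):
            return readable
    return "Unclassified failure"


print(humanize("NoSuchElementException: //div[@id='login']//button[3] couldn't be found"))
# -> "Wrong selector / element not found"
```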

[00:24:01] Joe Colantonio So just a quick follow-up, sorry. If you had 100 tests and 50 of them failed, does it tell you, out of those 50, maybe it's just one error that caused all 50 failures? Does it give you that type of information as well? So when someone comes in in the morning and goes, oh my gosh, I need to debug 50 tests, does it just say, actually you only need to debug this one issue?

[00:24:23] Natalia Poliakova Yeah, defect detection. Those rules help us not only to make the text readable; it also helps us define that we have only one bug, and a lot of tests failed because of that.

[00:24:45] Joe Colantonio I love that. I think that would save teams a lot of time. Ruslan?

[00:24:48] Ruslan Akhmetzianov Yeah, that's actually why the defects feature in Allure TestOps was one of the first to be developed and marketed, because everybody suffers from a huge number of failures for one single reason — like the page loads for a minute and then all the tests fail because they just don't see the locators, they don't see the buttons, they don't see anything. They don't have eyes, actually, so they never see anything, but here there is nothing to see at all. At this point, having a defect that hides all the failures, grouped by some nice regular expression — I don't know if it's possible to say a nice regular expression, but anyway, you just use a regex — something simple to group all the tests and hide them under the defect. And there is another feature which we love. We are dogfooding, so we use our own tools, and we love that you can link the defect with an issue tracker. Developers don't like testers' tools, that's a fact, and managers don't like testers' tools or developer tools either. This is why everybody loves Jira — because nobody loves Jira and everybody loves Jira; that's how it works. Jira is an amazing thing, because everybody uses it, and there are so many things to discuss about Jira that we could make a whole new series of Jira stories. But at this point, you want to push something out of Allure TestOps to show the defect is being worked on. And the link between the defect and the issue works both ways: you can see the status of the issue in Jira, or in another issue tracker, within Allure TestOps, and then in Jira you can see how many test cases are linked to the issue. This is how defects work in Allure TestOps. And this is a couple of steps further than just providing human-readable messages for failed cases.
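To illustrate the grouping step, here is a small sketch of how one regex rule can collapse many failures into a single defect to investigate; the rule, error texts, and issue key are made-up examples, not a product API.

```python
# Group failed tests under defects by matching their error text against rules,
# so 50 red tests with the same root cause show up as one defect.
import re
from collections import defaultdict

DEFECT_RULES = {
    "Login page never loads (JIRA-101, hypothetical)": re.compile(r"TimeoutException: .*login", re.I),
}

failures = {
    "test_checkout": "TimeoutException: waiting for /login took 60s",
    "test_profile": "TimeoutException: waiting for /login took 61s",
    "test_search": "AssertionError: expected 10 results, got 9",
}

grouped = defaultdict(list)
for test_name, error in failures.items():
    defect = next((name for name, rx in DEFECT_RULES.items() if rx.search(error)),
                  "Needs manual analysis")
    grouped[defect].append(test_name)

for defect, tests in grouped.items():
    print(f"{defect}: {len(tests)} failed test(s) -> {tests}")
```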

[00:27:06] Joe Colantonio And does it do the mapping as well, then? Say they fixed that defect; because you have that mapping, then, okay, we just need to run a test run to verify it worked, not 3,000 tests every 4 minutes. Right?

[00:27:18] Ruslan Akhmetzianov I don't know; I would say that this automated launching on issue closure is more about configuring a pipeline than about using TestOps. TestOps is more like a monitoring tool: it provides all the interfaces to run everything and to see everything. But how it works by default is that as soon as you close the issue, the defect will be closed automatically. And if the issue did not actually fix anything, you will just get all these failed tests back in your backlog — and let's reopen the defect, maybe?

[00:27:57] Joe Colantonio Gotcha. Right, right. All right, so it bubbles up — the defect wasn't really fixed. So how about quality gates? A lot of people spend a lot of time in Jenkins or CircleCI creating quality gates. What are your thoughts on quality gates? Do you use them? Are they part of your solution? Natalia, any thoughts on quality gates? Automated quality gates.

[00:28:17] Ruslan Akhmetzianov Yeah. Well, I believe that at this point I will just take the floor to speak about quality gates. There is no such feature right now, but we are thinking about how to implement it. Speaking of quality gates — this topic popped up in my mind because when I was talking to the Miro people, they had just developed a plugin for Bitbucket which actually is a quality gate. When you push your commit or merge the branch, there is an automatic system that looks for non-severe failures, for example, and says, okay, let's deploy. And if there are a lot of failed tests and those failed tests are severe, it just automatically rolls back the push. And this is how I believe quality gates should work, because I gave a talk at a conference recently on the quality gates topic. The funny thing is, when you try to Google what a quality gate is, you know what you get? In 2022, you get a definition that sounds like: a quality gate is a meeting where all the people in charge meet and go through a checklist. I mean, really, there are still a lot of articles that describe quality gates like this. That should not happen in the world of 2022 — everyone is remote, nobody wants to make a call with a checklist. So think about automating quality gates. And to automate quality gates, you need to gather the results. And this is what we are working on.
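The automated quality gate Ruslan describes boils down to gathering results and failing the build when severe tests fail. Here is a minimal sketch, assuming test results are available as a simple list of dictionaries; the result structure and severity names are assumptions, not a product API.

```python
# Block the merge/deploy step when any severe test failed: a non-zero exit
# code makes the CI job fail, which is what actually gates the pipeline.
import sys

results = [
    {"name": "test_login", "status": "failed", "severity": "critical"},
    {"name": "test_footer_links", "status": "failed", "severity": "trivial"},
    {"name": "test_checkout", "status": "passed", "severity": "critical"},
]

severe_failures = [r["name"] for r in results
                   if r["status"] == "failed" and r["severity"] in ("blocker", "critical")]

if severe_failures:
    print(f"Quality gate FAILED: severe failures in {severe_failures}")
    sys.exit(1)

print("Quality gate passed: no severe failures")
```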

[00:29:55] Joe Colantonio Nice. Very cool. Love to hear that. So, we're almost out of time, but before we go, I always ask this last question: what is one piece of actionable advice you can give to someone to help them with their automation testing efforts, and what's the best way to find or contact you? So, Natalia, do you have any actionable advice you'd give to someone if they're starting off with automation, or maybe on how to make their automation easier or better?

[00:30:21] Natalia Poliakova Well, automated tests can help us increase the speed of testing, but quality assurance can only be provided by people, not by automated tests. And people using the right tools to analyze automated test results can really improve their product and make their customers happier. That's why Allure TestOps is so oriented toward this analytics work. The main aim of QA is bug prevention and analyzing how to make product quality better, and we can achieve that only by using the results — not just by writing and running the tests, but by analyzing them and understanding what we can do to make our product better.

[00:31:13] Ruslan Akhmetzianov Yeah. As for me, my advice is extremely simple. So there is time and money and you can get more money eventually, but you never get more time. So when you choose tools and approaches, and when you are working on something, aim for saving time, not money.

[00:31:31] Joe Colantonio Love it. Great, great. So I'll have all the links in the show notes. But if people want to learn more — we talked a little bit about Allure TestOps — if they've never heard of it, how can they learn more about it?

[00:31:39] Ruslan Akhmetzianov Oh yeah, it's actually Qameta.com — I believe Joe will just add the link somewhere. There is a website, and there is a GitHub community created for Allure Report, actually, so the repository is for Allure Report, but there is a place where you can ask about TestOps and share any ideas or questions or anything. The whole team of maintainers, and me personally, try to look into all these topics, and if there is something interesting, we will come back to you, thank you for the feedback or the ideas, and tell you how we can achieve it.

[00:32:20] Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguildcom.kinsta.cloud/a425, and while you're there, make sure to click on the try it for free today link under the exclusive sponsor's section to learn all about Sauce Labs' awesome products and services. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help in the rankings of the show and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:33:03] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the FAM at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
