Next Generation Functional Visual Test Automation with Tobias Müller

By Test Guild

About This Episode:

With DevOps, more releases than ever are being generated, leading to many unexpected customer-side problems. What do you do? In this episode, Tobias Müller, CTO at Progile and a lifelong tester, shares his experience with automation, especially in regulated environments. Discover what problems regulated customers face because of DevOps, how to handle typical challenges in regulated markets, and how next-generation functional visual test automation can help.

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Tobias Müller


Tobias is the CTO at Progile and a tester. His goal is to help shape modern development methods and drive software quality with his distinctive expertise, high standards, and drive.

Connect with Tobias Müller

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:20] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today, we'll be talking with Tobias Müller, all about next-generation functional visual test automation, continuous validation, and much more. If you don't know, Tobias is the CTO at Progile and a seasoned tester. His goal is to help shape modern development methods and drive software quality with his distinctive expertise, high standards, and drive. He actually led the team that developed a really cool testing platform called TestResults.io, which I've been using for the past few weeks; I'm running a review on it that will be released this week, and you definitely want to check it out. So really excited to have him on the show. He has a lot of experience. You don't want to miss it. Check it out.

[00:01:03] The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Their cloud-based test platform helps ensure you can develop with confidence at every step from code to deployment, for every framework, browser, OS, mobile device, and API. Get a free trial. Visit testguildcom.kinsta.cloud/saucelabs and click on the exclusive sponsor's section to try it for free today. Check it out.

[00:01:03] Joe Colantonio Hey Tobias, welcome to the Guild.

[00:01:38] Tobias Müller Hey, thanks for having me.

[00:01:40] Joe Colantonio Awesome to have you. I guess before we get into it, is there anything I missed in your bio that you want the Guild to know more about?

[00:01:45] Tobias Müller No, it was perfectly fine. Just better than I could ever express it.

[00:01:49] Joe Colantonio Very nice. So I guess before we get into the main topic — we had a few conversations before this, and you mentioned something about dashboards that was kind of cool: that a green dashboard is bad. I think that's kind of counterintuitive. So I just want to get your explanation of why you said that, or what you meant by that.

[00:02:05] Tobias Müller Sure. The thing is, I learned that from Alex, one of our speakers. The reason is that a green dashboard means everything is okay, for 90% of management at least, and a red dashboard says something is not okay. There was a customer this week that had about 120 test cases, and 21 of them failed. So there was a little red pie in the top right corner, and they actually got nervous, and testing got attention. They said something must be broken and so on, but nobody was actually looking into the details. And that is why we say the dashboard is not enough. If you look at a dashboard in green or red, it doesn't mean anything, because it just tells you that none of the test cases failed or that some of them did. As soon as there's something reddish on the dashboard, people scream and start to get interested in testing. They usually say we need to get that fixed, even though they actually don't need to fix it, because it's just a minor finding somewhere. So you seem to get a lot of insight from a dashboard, but that's actually not true.

[00:03:00] Joe Colantonio Absolutely, I totally agree with you. So I think this is one of the problems — not problems, one of the side effects — of DevOps trying to release quicker and faster. I know you have a lot of experience, and what I love about you all is that a lot of it is in heavily regulated markets and enterprises, so you're not just testing simple websites. So I'm just curious what other types of issues you've seen your customers facing with DevOps?

[00:03:23] Tobias Müller Yeah, some of our customers actually aren't even adopting DevOps yet. You have to remember that in regulated environments you need to validate everything. If you change from one version of your build server to another, you need to revalidate your whole build environment. It's not like somebody says, yeah, I have an update on my laptop, I can just use this, push it somewhere into the cloud, and run it. Everything needs to be validated, down to the framework version and the operating system version that you use to build the system. You actually need to prove that the system you build is equal to the one you built previously if it has the same version number, and stuff like that. So some of our customers are not yet in the DevOps world because of that. For the ones that are, the problem is that most tools out there don't account for regulations. Most of the time it's an add-on: you bolt something like electronic signatures onto the process. Traceability can also be handled by some of those tools. But the biggest pain is really validation. Whenever you change something, as I just mentioned, you need to validate the whole chain again. So you have to come up with clever ways to keep the surface you need to validate as small as possible. And that is where another architectural term comes in that I don't like: microservices, where you put all of the required functionality into a small service and that is fixed. We call that a frozen solution. It doesn't change over time, and all the additional features are added around it, so you don't need to validate the core again because it stays the same — the execution engine, say, stays the same. So they struggle with that. DevOps means lightning fast and always being on the latest toolchain; that is what developers expect. If there is an update, you get new NuGet packages and they just get applied to the solution. The same is true for the build, for the CI/CD pipeline — you always keep that up to date. If you're in the cloud, you get updates every three months anyway, for example with TFS and Azure DevOps Server from Microsoft, and that is what they struggle with. The same goes for Windows 10 as an operating system. We do this for a customer with a blood-typing robot: if you go to the hospital and they do blood typing, that robot is used in at least 90% of hospitals worldwide. And they have a problem with the Windows 10 version that's installed, because Windows is constantly updated, and in those environments you cannot update at the usual frequency — everything needs to be revalidated, which takes a lot of time, and the update frequency is faster than the validation frequency can be. So they really struggled with that cadence.
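
To make the reproducibility requirement concrete, here is a minimal sketch — not Progile's tooling, just an illustration with hypothetical paths — of proving that two builds claiming the same version produce bit-identical artifacts:

```python
# Minimal sketch of "prove this build equals the previous one":
# hash every artifact of two builds and compare. Paths are hypothetical.
import hashlib
from pathlib import Path

def artifact_hashes(build_dir: str) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    root = Path(build_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

# Two builds claiming the same version number must produce identical artifacts.
assert artifact_hashes("build_v1.2.0_run1") == artifact_hashes("build_v1.2.0_run2")
```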

[00:05:49] Joe Colantonio I used to work in a regulated environment, and we had to validate each tool, and it was like a big project to validate even one tool. You just can't pull things in randomly, create a solution, and ship it out there. So it's definitely an issue.

[00:06:02] Tobias Müller Yeah. And think about it: most of the modern test tools are actually on the web.

[00:06:05] Joe Colantonio Yes.

[00:06:05] Tobias Müller So you don't even know if they are updated and that is the real problem for them.

[00:06:09] Joe Colantonio Yeah. A lot of people don't realize that's one of the reasons why we used a vendor tool when we started: the tool was actually validated, so it helped that you didn't have to validate an open-source tool. Not that open source was bad; in this environment it just didn't make sense. You bring up a good point, though, that it's not just a browser, right? If you're working in one of these markets — MedTech, pharma, these types of environments — they have more than just web apps. So do you find that a lot of people struggle with automation because maybe they chose a tool that was popular, but that tool just happens to be only for browsers? And then, like you said, you have to run against all these environments it doesn't cover, so it didn't meet their needs.

[00:06:46] Tobias Müller Yeah. I guess the biggest pain point is that they don't have one tool that can support all of the different environments. Most of the time a company starts with Windows; then they have a problem with Windows 10, so they move over to a different operating system, something Unix-based, because that is controlled and you have much more control over it — and the current tool is lacking functionality there. Now they have two tools in the set that they need to validate; the validation effort just doubled, and it started with basic stuff like that. Then they update to an HTML-based interface, which is still hosted locally because, again, it needs to be under version control, but that's yet another technology. Most tools support one or two technologies, and most of them really only one today, like web applications. But if you have a bunch of different technologies on the same analyzer and you need one single solution, that is where they really struggle.

[00:07:32] Joe Colantonio So this is why I love talking to vendors: you have a lot of customers, and you see things in the real world, so I assume the solutions you come up with address a real need you've been seeing — something a lot of people are struggling with. And I believe testresults.io is one of those solutions, because it does something a little different from similar tools: it uses more of a visual-validation or image-based approach rather than looking at the code. And I guess there are multiple benefits. The first one I'd like to get your opinion on as we go into this discussion: if you're using certain tools and you're working at the code level, writing loops to try to interact with an application, at what point do you stop testing the actual application as a user and start just testing the code? I thought maybe we'd start off with that as a jumping point. Did I understand that correctly, from the conversation that we had?

[00:08:20] Tobias Müller Yeah, that's the point, actually. I mean, there are multiple benefits of doing it visually. But perhaps we need to clarify the term, because everybody knows visual testing, and visual testing in the end is just a screenshot comparison: you compare what you had in the past to the current state, and you either accept the changes or you don't, and based on that you can find defects in the software. That is not what people are looking for in these scenarios. What they want is a unified approach, unified access to all of the different technologies that are used on the analyzer. What we do is grab the screen and also interact based on visuals, and that's a bit different. It's functional and visual testing combined in one, but you can access all of the different technologies the same way, and that solves the problem for them. What is also interesting is a problem in most test automation tools: you do a lot of automation — enter text, do this, compare here, press that button — but you never check the individual interactions. So at some point the test case fails, at step 22 for example, and says, okay, I couldn't enter the text or the result was wrong, but the problem already happened in step number two. So what we also changed for those regulated environments is to do what a human does: verify every single interaction with the system. That is also what makes a difference in those markets.
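
A minimal sketch of that per-interaction verification idea — an illustration only, not the TestResults.io API; all names are stand-ins:

```python
# Hypothetical sketch: verify the visual outcome of each step immediately,
# so a broken step 2 is reported as step 2, not as a failure at step 22.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    action: Callable[[], None]   # e.g. click a button, enter text
    verify: Callable[[], bool]   # visual check of the resulting state

def run(steps: list[Step]) -> None:
    for no, step in enumerate(steps, start=1):
        step.action()
        if not step.verify():
            raise AssertionError(f"Step {no}: expected state not reached")

# Toy usage with stand-in lambdas; real checks would compare screenshots.
run([Step(lambda: None, lambda: True), Step(lambda: None, lambda: True)])
```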

[00:09:35] Joe Colantonio So image-based automation is not new, and I just want to get at why this is slightly different. I'm not going to name vendors, but I know there's an open-source tool called Sikuli that could do this, but it was kind of flaky: if a pixel was off, it was really hard to make it reliable. And like I said, there are other solutions that are more commercial. So how did you get around that? Did you have a different type of approach to get around those kinds of issues?

[00:09:59] Tobias Müller Yeah, we worked around that by using convolution. Most people know that from convolutional neural networks, that artificial intelligence stuff. We use something similar to make sure that we can find the elements on the screen and interact with them even when they look different. It's more like customized feature vectors, where you can still identify an element, and it tells you: this is the element, with the following probability. That is good enough for testing, because we can demand essentially 100% probability. And that is what we did with your tests as well. As I said, we want to identify: is this site shown in a pop-up, or is the site shown full screen? And you can also say, no, I'm accepting changes like that. That is where we act a bit differently, because if you do a pixel-by-pixel comparison, the chance is extremely high that the test will only run once or twice — you run it on a different PC and it won't work anymore.
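
A minimal sketch of convolution-style element matching, assuming OpenCV's template matcher rather than the vendor's proprietary feature-vector approach; file names are hypothetical:

```python
# matchTemplate slides the template over the screen (a cross-correlation,
# i.e. a convolution with a flipped kernel) and scores every position.
import cv2

screen = cv2.imread("screen.png", cv2.IMREAD_GRAYSCALE)       # hypothetical files
template = cv2.imread("ok_button.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

# Treat the best score as a match probability and demand a high threshold,
# so the element is found even when it is not pixel-identical.
if best_score >= 0.95:
    print(f"Element found at {best_xy} with confidence {best_score:.3f}")
```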

[00:10:52] Joe Colantonio Yeah. And you actually showed me a demo where you have a field with text above it, and you're identifying the field based on that text. But it doesn't matter where it is — you used some sort of algorithm so that even if the screen gets shorter and that text moves, it's still able to find the field. Is that the type of technology you just explained?

[00:11:09] Tobias Müller Yeah, and there's technology on top of that to identify it. With most image-based approaches — you can see it in their demonstration images most of the time, if you look closely and know how it works — they select the label and the text field together and say, I have an image now and I can enter some text. That is true. But if you resize the web page or resize the application, if it's responsive, the field will be somewhere else after the resize operation. That is why we use special algorithms to identify the relation of the text field to the label. And the only thing that we need to know is your reading direction. If we know that you're reading from the top left to the bottom right, we can identify the text field that you as a human would associate with the specific label, and that is how it works underneath.
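
A toy sketch of that reading-direction idea: associate a label with the field a top-left-to-bottom-right reader would pair it with. The heuristic below is an illustration, not the actual algorithm:

```python
# Boxes are (x, y, width, height). Pick the nearest field that comes at or
# after the label in reading order (top-to-bottom, then left-to-right).
def reading_key(box):
    x, y, _, _ = box
    return (y, x)

def field_for_label(label_box, field_boxes):
    candidates = [f for f in field_boxes if reading_key(f) >= reading_key(label_box)]
    # Prefer the candidate closest to the label, so a resize that moves the
    # field still resolves to the same logical element.
    lx, ly, _, _ = label_box
    return min(candidates, key=lambda f: (f[0] - lx) ** 2 + (f[1] - ly) ** 2)

label = (100, 40, 60, 16)                    # "User name" label
fields = [(100, 60, 180, 22), (100, 120, 180, 22)]
print(field_for_label(label, fields))        # -> (100, 60, 180, 22)
```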

[00:11:51] Joe Colantonio So I know a lot of companies — and where I used to work we struggled with this — had to build a version that was U.S.-based, so we had U.S. text, and then they had to do a localization so it ran in different countries with different text. How would you get over that using this kind of visual approach?

[00:12:06] Tobias Müller Yeah, for stuff like that — for text — we're actually using OCR, but it's not the typical kind of OCR. You know this from your own history as well: with plain OCR, where most of the time you're looking for single words or pairs of words, the advertised accuracy is 99.999%. But that figure applies to a thousand letters on A4 paper, black text on a white background. On screens, if you do OCR on those small single-line elements, you get wrong characters: the typical O is replaced by a zero, a lowercase l is replaced by a 1, and stuff like that. So we do a reverse OCR — another algorithm we invented — where you know in advance what you are looking for. If I know I'm looking for the label and I get back from the OCR something like "one ...", then I know that is actually the label, based on some distance calculations that we do. That goes back to the spatial computing we use for the image detection as well. So we always take a tell-us-first approach. We don't have an option like: give me all the text, and then I find my information somewhere in this text. It's: hey, tell us what text you're looking for, and we check whether that text is on the screen and give you the context of its position on the screen.
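
A sketch of the reverse-OCR idea using a standard-library similarity measure in place of the proprietary distance calculation:

```python
# We know the expected label in advance, so OCR noise like O->0 or l->1
# only has to survive a similarity check, not be read perfectly.
from difflib import SequenceMatcher

def is_expected_label(expected: str, ocr_text: str, threshold: float = 0.8) -> bool:
    similarity = SequenceMatcher(None, expected.lower(), ocr_text.lower()).ratio()
    return similarity >= threshold

print(is_expected_label("Patient ID", "Pat1ent 1D"))  # True despite OCR noise
print(is_expected_label("Patient ID", "Sample No."))  # False
```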

[00:13:20] Joe Colantonio Nice. So you also have something like a model-based approach that's kind of unique. Over time you have all these screenshots already in your history; you may not have tested a specific flow, but you already have the screenshots you need to create that test case. Can you talk a little bit about that approach? I believe it's like a model.

[00:13:36] Tobias Müller Sure. The idea behind that is that you don't need to change all of your scripts. With the typical approach you have test scripts, and — as I'm not allowed to name any vendor — at some point in the past somebody came up with the clever idea that having individual test scripts doesn't make much sense, because if the application changes, you need to change all of the different test scripts. So you move the interaction logic — how do I interact with the application — into a model, and your scripts work against that model. That was 20 or 30 years ago, I don't know. We use the exact same approach. The only difference is that we're capturing screenshots all the time, and you tell us which areas are of interest to the user. Based on that information, what we generate in the background is the visual twin. That means you have a model that you can use to access the application in the normal fashion, or you can define a test case based on the visual twin. In the web browser you have your full application available, even without having the application itself available. You're working on screenshots and the interactive elements of those screenshots, and you write the test case by stepping through your application as if it were the real application. That is how we brought model-based testing to the next level.
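
A minimal sketch of what a "visual twin" model might look like as a data structure — an illustration of the concept, not the product's internal format:

```python
# Each screen is a captured screenshot plus named interactive regions, so
# test cases can be written against the model even when the real
# application is unavailable.
from dataclasses import dataclass, field

@dataclass
class Region:
    name: str
    box: tuple[int, int, int, int]      # x, y, width, height on the screenshot

@dataclass
class Screen:
    screenshot: str                     # path to the captured image
    regions: dict[str, Region] = field(default_factory=dict)

login = Screen("captures/login.png", {
    "user": Region("user", (120, 80, 200, 24)),
    "submit": Region("submit", (120, 160, 90, 30)),
})

# A test case references model elements, not pixels or locators; when the
# application changes, only the twin is re-captured, not every script.
steps = [("type", login.regions["user"], "alice"),
         ("click", login.regions["submit"], None)]
```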

[00:14:43] Joe Colantonio Something just came to my mind which is interesting. If you work in a regulated environment, these environments are sometimes difficult to get up and running. So if you're onboarding someone new, they may not know how to get the application up and running on their machine. But it sounds like with this approach they could get started and understand the application quickly, because they're working against the model — or the image. Does that make sense?

[00:15:03] Tobias Müller Exactly. Yeah, that's the point. For example, with the blood-typing robot I mentioned before, you need to put in blood samples, you need to put in reagents, you need to put in additional consumables, and stuff like that. Until you know how to start it, it takes roughly two to three weeks before you have everything set up and can start the software. With the model, you just start instantly, because you don't need to take care of any of that.

[00:15:24] Joe Colantonio So as I mentioned in the intro, I'm actually checking out this solution — I've been messing around with it for a few weeks and I'll be dropping a review on it this week. What's interesting as well is that you have two approaches, which I like; you're not a dictator insisting it be done one way. You have a code-based approach where, if someone's familiar with Visual Studio, they can start coding against these images — but without a lot of code. And then you also have a no-code type of approach. So can you explain both approaches, and how maybe your code approach is different from what people may think when they hear "code-based approach"?

[00:15:58] Tobias Müller Yeah, that's a good point, actually, Joe. We deliberately decided on a low-code approach, because I don't believe in no code — I've seen a lot of different tools. Usually with no code, either you don't have all the capabilities you need, or there's some box where you can put custom code. And after a year of test automation, most of the testing ends up in that little box, because you can do everything in that little box — and that breaks the whole no-code idea, which is supposed to give you maintainability and so on, because now somebody can do something tool-specific in there. I haven't seen a tool yet that doesn't have this put-any-kind-of-code-in-a-little-box escape hatch, and it gets misused. On the other side, take Selenium, an open-source tool: it takes some programming knowledge to use Selenium in a way that's actually reliable. Sure, you can get your first elements within minutes, no problem. But to make it reliable, you need to understand loops and waits and stuff like that, and different coding techniques. You even see testers giving introductions to coding — they run coding courses — while the profession is actually testing. That is brilliant, but it shows there's a lot of additional effort required if you have a full coding environment. So what we decided is: we give you the full coding environment, but initially you only get low-code access. And we have something we call codified expertise, where all our expertise is put into code. Think about visual testing: if you open a dropdown, it might drop down to the bottom, it might drop up to the top, it might even fly out from the middle — and all you have is its visual representation. So you need to understand: what is the difference from the previous screen? How do I detect that dropdown? How do I handle it? How do I scroll within it? Because you only have pixels; there is no additional meta information. That is what we've been doing for the last 15 years, and that is why we came up with this codified expertise. So you can use it easily — for scrolling, you don't need to code anything, it just scrolls out of the box. I think that's something you've seen in the demonstration as well.
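
As one illustration of codified expertise, here is a sketch that detects a just-opened dropdown as the region that changed between two screenshots, assuming NumPy arrays; the real handling is certainly richer:

```python
# Pixels only, no metadata: diff the screenshots before and after the click
# and take the bounding box of everything that changed.
import numpy as np

def changed_region(before: np.ndarray, after: np.ndarray):
    """Bounding box (x, y, w, h) of pixels that differ, or None."""
    diff = np.any(before != after, axis=-1) if before.ndim == 3 else before != after
    ys, xs = np.nonzero(diff)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

before = np.zeros((100, 100), dtype=np.uint8)
after = before.copy()
after[40:80, 10:60] = 255                 # the dropdown painted itself here
print(changed_region(before, after))      # -> (10, 40, 50, 40)
```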

[00:17:52] Joe Colantonio Absolutely. What I like about this approach is that it goes back to the basics of testing. Testers who want to test an application don't necessarily want to be developers; it's two different mindsets. But that's almost out of vogue now — it's like, well, testers are developers too, so they need to learn all these crazy development concepts. I guess that's good, but it's almost like we've lost sight of testing, right? This kind of brings it back to where it was.

[00:18:17] Tobias Müller Yeah, that's another brilliant point, because that is actually my mission: I want to combine development know-how and testing know-how. So we have developers and we have testers. I know in today's world everyone is supposed to be able to do everything, but it doesn't work out. What I want is for the developer to add additional functionality, because they can code, and for the testers to then use it in the model. The testers have the low-code approach; they create all of their dedicated test cases against the model provided by the developers. And then the magic happens, because you can take those tests written by the testers and put them in your CI/CD pipeline — it's part of the development tool stack. Now you really see the benefit of having testers in there, writing all the test automation and bringing it back into the pipeline, where it's executed on every iteration. One of the typical problems you see for testers is: hey, I tested all of this stuff, but nobody cares. If I put something red on the dashboard, everybody cares; but if I don't put something red on it, or not enough red, then nobody notices what I'm doing or sees the benefit of it. If I can bring my work back to the developers, and they use the testers' tests the way they use unit tests today — fully automated end-to-end tests, the full system tested, inside the CI/CD pipeline — that is our goal, because that saves a lot of time in the end.

[00:19:41] Joe Colantonio So another thing I know your tool helps address: not only do testers almost need to be developers, they almost need to be infrastructure experts nowadays — know about AWS, configure things to run in parallel, all this crazy stuff. I believe you have a way around that, where you're able to scale up a lot of VMs automatically for folks. So maybe talk a little bit about how you achieve that, or what that technology looks like.

[00:20:01] Tobias Müller Yes, that is actually something we needed for one of our customers in the beginning: we needed to be able to scale, because if you have automated tests, you also want to run them. That is the interesting part, because most test automation tools trick you in: you have a few design licenses, and afterward you need to buy a lot of runtime licenses, because otherwise you cannot run everything overnight. So we said: why don't we just make it simple? You say, I want to run that in the cloud, and all of your tests run automatically, simultaneously, in the cloud. We just bring those VMs up, run the tests, and bring the VMs down for you. That saves a lot of time and money, to be honest, and you don't need a standing device farm anymore, because it's spun up when you request it and torn down afterward. That is actually not how cloud providers want us to use their cloud — we have already had some discussions about that — but it is brilliant for testing.
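
A sketch of that spin-up/run/tear-down pattern; `FakeCloud` and all method names are stand-ins, not a real provider SDK:

```python
# Every suite gets a fresh VM from a validated image; the VM is always
# destroyed afterward, pass or fail - no standing device farm.
from contextlib import contextmanager

@contextmanager
def ephemeral_vm(cloud, image: str):
    vm = cloud.create_vm(image)        # spin up from a validated image
    try:
        yield vm
    finally:
        cloud.destroy_vm(vm)           # always torn down, pass or fail

def run_suite_in_cloud(cloud, image: str, suite: list[str]):
    with ephemeral_vm(cloud, image) as vm:
        return [vm.run(test) for test in suite]

class FakeCloud:                       # stand-in so the sketch runs
    def create_vm(self, image): print(f"up: {image}"); return self
    def destroy_vm(self, vm): print("down")
    def run(self, test): return f"{test}: passed"

print(run_suite_in_cloud(FakeCloud(), "win10-validated.vhd", ["t1", "t2"]))
```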

[00:20:52] Joe Colantonio Another place in regulated environments where I saw a gap — and I'm not picking on open-source tools, it just takes a lot of effort to get right — is traceability, and having not just a dashboard but a portal where someone can go in and know which test maps to what, so that if an auditor came in, you could just point them to the portal. So talk a little bit about the portal feature and how it helps testers, especially in regulated markets.

[00:21:16] Tobias Müller Yeah, traceability is another point, that's true. Usually you do have some, and honestly, all of those tools — Selenium, Appium, all of them — work brilliantly; use them. You just don't need them anymore if you use testresults.io, that's all. The problem with all of them is that they generate log files, or upload to a repo somewhere, or you have to build things around them. And in regulated environments, if you build stuff around a tool, you need to validate that stuff too — you have to think of it as a complete system that needs to be validated. That's the point. If you want traceability, it means you can trace from the requirement to the specification, through the software detailed design, down to the test case. Then you have test reports, meaning executed test cases, and all of those executions can still be traced along the chain up to the requirement at the top. And you can imagine that in regulated environments you need to create trace tables, where you trace from top to bottom and from bottom to top, in both directions, to show that nothing is missing anywhere in those chains. That is what we address with the traceability in the portal. If you put a test case into the portal, it is automatically versioned. So it has a version; if you change it afterward, you get a new version. If you execute it, you get a new versioned execution, and all of that is linked to the environment in which it was executed and the software version it was executed against. That means you have full traceability of your test case without adding anything extra. And that is part of the central portal, which is already validated.
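
A minimal sketch of such a versioned traceability chain as plain data — an illustration of the concept, not the portal's schema:

```python
# Requirement -> test case -> execution, each immutably versioned,
# so the chain can be walked in both directions for a trace table.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCase:
    id: str
    version: int               # bumped on every change, old versions kept
    requirement_id: str        # upward trace to the requirement

@dataclass(frozen=True)
class TestExecution:
    test_case_id: str
    test_case_version: int
    software_version: str      # build the test ran against
    environment: str           # e.g. archived VM image it ran on
    verdict: str

# Bottom-to-top trace from an execution back to its requirement.
tc = TestCase("TC-42", 3, "REQ-7")
ex = TestExecution("TC-42", 3, "1.4.0", "win10-2021-09.vhd", "passed")
print(f"{ex.verdict}: {ex.test_case_id} v{ex.test_case_version} -> {tc.requirement_id}")
```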

[00:22:46] Joe Colantonio Once again, in a regulated environment we had to save our environment — freeze it, along with the code and the tests we ran — for the validation phase, or verification phase, so that if anyone came in to do an audit, they could see the exact environment. It was a nightmare to put together. So it sounds like this helps overcome that type of issue.

[00:23:04] Tobias Müller Yeah, that's the point. We actually use VHDs and ISOs, and you just archive them like you usually would. If it comes to an audit, you can just spin them up, and that's it. The software knows exactly which version of the ISO or VHD file was used, exactly which cloud provider it was instantiated on, exactly which revision of the test case was used and which software version — and the software binaries are also in the portal, so we can regenerate the state that was present during the initial execution at any point in time.

[00:23:31] Joe Colantonio All right. I just think it's important once again to stress the main benefit of this approach: you write it once, and you can literally run it on a Windows machine, a Unix machine, a browser, a web app — did I understand that correctly? Is that the main selling point — you write it once and then you can run it against all these other types of environments and OSes and things like that?

[00:23:54] Tobias Müller You can do that, but everybody claims that, and most of the time it doesn't work. What we actually claim, based on the model-driven approach you mentioned before, is that you generate models for the different environments where you want to run the test case, and then you can run the same test case against those different environments. And that is true, because you are actually interested in the test case, not in how to interact with the software on a technical level.

[00:24:17] Joe Colantonio Absolutely. Great. Thanks, Tobias. Before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts? And what's the best way to find or contact you, or learn more about testresults.io?

[00:24:28] Tobias Müller The one piece of advice is: before you automate, think about what needs to be automated, and invest where you get the most value — where can you actually save the most time with automation? Automating for the sake of automating is fun, but that's what developers do; you shouldn't do that. And you can find me at www.testresults.io, or write me an email at tobias.muller@progile.ch, or connect with me on LinkedIn as Tobias Müller.

[00:24:51] Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguildcom.kinsta.cloud/a424, and while you're there, make sure to click on the try-it-for-free-today link under the exclusive sponsor's section to learn all about Sauce Labs' awesome products and services. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:25:33] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at Testguild.com. And if you're in the DevOps, automation, or software testing space — or you're a test tool provider who wants to offer real-world value that can improve skills or solve a problem for the Guild community — I'd love to hear from you. Head on over to testguild.info and let's make it happen.
