AI Meets Cucumber: A New Testing Approach Using Prompt Engineering with Guy Arieli and Tal Barmeir

By Test Guild

About This Episode:

Today's topic dives deep into the innovative world of AI in software testing with our distinguished guests, Guy Arieli and Tal Barmeir. With years of experience in the testing domain, they've pioneered a groundbreaking automation solution using AI to bolster the efficiency and effectiveness of testers around the globe.

We'll unpack the power of Blinq.io, a cutting-edge SaaS offering that revolutionizes the way testers work. Also, learn how Blinq.io's virtual testers translate scenarios into test automation code, supporting a high-speed software release process across multiple languages and platforms. Guy and Tal will shed light on the remarkable capabilities of their AI system, which not only adapts to changes but also pinpoints and fixes minor issues, saving testers from tedious tasks and last-minute hassles.

Listen in to discover how generative AI can generate tests for API and UI, enhance documentation, and even test as humans would – pushing the boundaries of automated testing. We'll delve into the intricate process, from test data generation to the AI's unique ability to operate with or without prior product knowledge, catering to diverse testing expertise levels.

Check it out for yourself now: https://links.testguild.com/blinq

About Guy Arieli


Guy co-founded and served as CTO of Experitest (acquired by NASDAQ:TPG) and founded Aqua, which was acquired by Matrix (TLV:MTRX).

Prior to that, Guy held technology leadership roles as a test automation engineer/lead in startups and large publicly traded companies, including Atrica, Cisco, 3Com, and HP.

Guy holds a BSc in Engineering from the Technion, Israel Institute of Technology, and recently completed machine learning courses at Tel Aviv University.

Connect with Guy Arieli

About Tal Barmeir


Tal co-founded and served as CEO of Experitest (acquired by NASDAQ:TPG), a B2B SaaS DevOps software company. Prior to that, Tal held various leadership roles, including at Accenture (London, NYSE:ACN) and Comverse (Israel), where she was Head of Marketing in the Services Division and Hi-Tech Strategy Manager, among others. In addition, Tal served as a Lieutenant in the IDF (Israeli military) and was selected to take part in peace negotiations.

Tal holds an MBA from INSEAD, Fontainebleau (France), an MA in Economics (summa cum laude) from Tel Aviv University, and an LLM in Law (magna cum laude) from Tel Aviv University. Tal also participated in Tel Aviv University's Interdisciplinary Program for Outstanding Students (limited to the university's top 20 students).

 

Connect with Tal Barmeir

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:25] Joe Colantonio Hey, it's Joe. Welcome to another episode of the Test Guild Automation podcast. Really excited about today's episode. Today, we'll be talking with Guy and Tal all about AI meets Cucumber, a new testing approach using prompt engineering, and a really cool, I think, innovation in automation that we haven't seen in years. If you don't know Guy and Tal, they are seasoned serial entrepreneurs in the testing domain, boasting an impressive 25 years of experience. They've done it all. Their previous venture, Experitest, now Digital.ai, was at the forefront of providing test automation solutions for mobile apps during the early stages of smartphones, and rather than just sit around with a pile of money, they decided to come back into the game with another cool solution I think you need to know about. And it's leveraging AI in a way I think is going to be really powerful. So this episode is going to be primarily focused on harnessing the power of AI to empower testers to really accelerate their automation efforts using a unique approach that you don't want to miss. You'll want to stay all the way to the end to hear this. Check it out.

[00:01:24] Joe Colantonio Hey, Guy and Tal, welcome to The Guild.

[00:01:29] Guy Arieli Hi.

[00:01:31] Tal Barmeir Hi, Joe.

[00:01:33] Joe Colantonio Great to have you. I guess before we get into it, just a little background, which I touched on a little bit: you are entrepreneurs, you are experts in this domain. So why come back with another solution in the testing domain when you could probably focus on anything nowadays?

[00:01:49] Tal Barmeir So I think, first of all, we really like the testing domain. We think it's a critical part of any software product and sometimes overlooked. So we're actually very enthusiastic about it. And I think it was really Guy's idea to try and see how artificial intelligence, specifically generative artificial intelligence, can actually benefit the testing domain. And he came up with this really cool idea and we just couldn't sit back and rest.

[00:02:18] Joe Colantonio Love it. So Guy, talking about that, what made you think, as you were probably experimenting, oh, wait a minute, I can probably help solve an issue I know testers are really struggling with, using this approach?

[00:02:30] Guy Arieli Yes. So I had an opportunity for two years to invest in artificial intelligence. I was part of Tel Aviv University and took every course possible in machine learning and natural language processing. And at a certain point in time, I understood that it can be leveraged not just to generate text, but to make the decisions that a day-to-day tester is making. And we can empower those testers with AI by enabling the AI to make those decisions.

[00:03:10] Joe Colantonio Love it. So obviously that's been a hot topic. Every new tool that's coming out now has AI attached to it. But as you said, you've been working on this since before it became, kind of, more and more mainstream. Where do you see the legitimate pros of AI, or the legitimate applications of AI, that aren't just a lot of the marketing speak we've been seeing nowadays?

[00:03:32] Tal Barmeir I think at the end of the day, generative AI in a way creates a synthetic human brain which you can leverage in various directions. And we really looked at how we can leverage that to the benefit of the testers and the testing managers. And we found that you can really significantly increase the productivity of testing by enabling testers to basically have an army of virtual testers underneath them that are able to work for them, create huge productivity and a lot of value, and also make their life much better, because those AI virtual testers can basically work for you during the night. They speak any language, and they enable you to meet your deadlines and speed up the testing process in a way that you couldn't before. We think, first of all, it's not just AI speak, it's actual stuff working. And we invite anyone to see some of the demos and webinars where we actually show it in action. And also the value is huge, because it's not just improving certain aspects of testing, it's actually creating a whole load of testers working for you, enabling you to become much more productive and meaningful in the organization.

[00:05:03] Joe Colantonio You mentioned a good thing, Tal. You said, have an AI work for you, not replace you. I really think we're going through a transition period. A lot of times I see testers on LinkedIn saying, oh, AI doesn't work, and they're just ignoring it. I saw this 20 years ago with testers ignoring automation that could help them. I think they're going to be in for a shock. I don't know if I'm too pro-AI, but how do you see the importance of AI in this transition? Are there certain skills testers need to be ready with? Because I think the technology behind the scenes is rapidly getting better and better too.

[00:05:39] Guy Arieli Yes. So first of all, we should examine what is happening in the development world with tools like Copilot. It's obvious that it's boosting the work of developers dramatically now. And it's also impacting the QA engineer's work, because now the same number of engineers are streaming more and more code into QA. So the bottleneck around the QA work is growing because of that. But if we focus on how it impacts developers, you can see that it has added something like 20 to 30% to their productivity.

[00:06:23] Tal Barmeir And I also want to mention one more thing here. At the end of the day, QA today is never actually able to catch up with all the code that's shoved over to it from the engineering teams. This problem is only going to grow as Copilot and other development tools generate even more code, even quicker than before. And all of that ends up at the door of the testing teams. It's not that you're taking organizations that are able today to cope with the mission at hand and hurting them in any way. You're actually enabling them to cope with it by providing them this extra, enhanced productivity opportunity presented by generative AI. And I think that unless the testing world is able to grasp this opportunity of leveraging and harnessing generative AI for its own ability to cope with what's going on, it will find itself suffocated by the amount of code that is going to be shoved at its front door. And the old ways of working, without any sort of accelerating ability, will just not be able to cope with that.

[00:07:41] Joe Colantonio I love that point. So rather than needing fewer testers, in order to keep up with all the code that's being generated, you need more testers, I think. That's a great point, Tal. Thank you. So along those lines, there are some skills people know already, and they might be confused because I mentioned something called Cucumber at the beginning of this. How does Cucumber play into where you see your technology coming into play?

[00:08:08] Guy Arieli Miraculously, it's as if Cucumber was developed specifically for AI. It's the ultimate prompt for AI decision-making, for performing some action. It is written in English, which any language model understands. And if it is written at the business level, not clicking on a button, but saying, okay, I want to log in, I want to add an opportunity, I want to verify that the opportunity got added, then it has the potential to generate tests that will be able to survive huge changes in the application. They will still be valid tests; they are valid for mobile, valid for different languages, and for different screen sizes. We are leveraging this amazing tool that was invented so that product managers, or non-coders, could play a part in the automation world, and we let the AI engines understand their request and operate on the user interface.

[00:09:27] Tal Barmeir I want to build on what Guy just said, Joe, because I think it's a very strong point. At the end of the day, generative AI is only as good as how accurately you speak to it. Similar to the ability to get good responses from ChatGPT, the question is how you ask it. The same goes for using generative AI for testing: if you want it to test what you need, you need to describe it in an accurate, professional way. And that is exactly what Guy has been mentioning here, that in a miraculous way, Cucumber is just that language that enables a tester to communicate with generative AI and create tests using it. That's a very, very strong point for testers: having the capability of creating test scenarios and feature files in an accurate, tester-speak language, basically, called Cucumber.

[00:10:29] Joe Colantonio All right. So that's a great point. So how do people do that then? I think the term is prompt engineering. How do people get better at that prompt engineering in order to generate good results? Because, like you said, it's only as good as the questions, or the way you formulate, I guess, the tests that back the Cucumber.

[00:10:46] Guy Arieli I think that you need to be balanced. One type of prompt would be just "test the user feature." Obviously, the AI will not be able to really understand what you want to do. At the other end, you would say, okay, click on this text field and send this text, click on this text field, send this text, click on the login button, and so on and so on. And this is the other extreme; it's too detailed to generate a good test. If you want to generate a good test using generative AI, you need to balance between those two edges.

[00:11:29] Tal Barmeir Yeah. And I think what Guy is saying is very critical to the understanding that, at the end of the day, we speak to the AI on a business-logic level and ask it to do things. We're not talking to it at a technical, detailed level, and we're not staying at a high, storytelling level. You need to know the relevant level of communication, the way a tester, or a product manager, would actually communicate to the testing teams what it is that they require, which today is Cucumber as the standard. And that is exactly what is well processed by generative AI.
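To make the level of detail Guy and Tal describe concrete, here is a rough Gherkin sketch of a business-level scenario like the login/opportunity example above. The feature name, step wording, and the CRM-style "opportunity" domain are illustrative assumptions, not Blinq.io's required syntax; the point is that each step states an intent rather than a click-by-click recipe.

```gherkin
# Illustrative only: a business-level scenario, not tied to any specific tool.
Feature: Opportunity management

  Scenario: Add a new sales opportunity
    Given I am logged in as a sales representative
    When I add an opportunity named "Acme renewal"
    Then the opportunity "Acme renewal" appears in my opportunity list

# The other extreme Guy warns about would look like:
#   When I click the "username" field and type "jdoe"
#   And I click the "password" field and type "secret123"
# Steps at that level of detail tie the test to the current UI and
# defeat the purpose of letting the AI decide how to perform the intent.
```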

[00:12:12] Joe Colantonio With this approach now, with AI mixed into Cucumber, can you work at a higher level then, and will it automatically be able to make the connections of what needs to be done? Like, say, given I'm a radiologist, a patient comes in and I check for a broken bone. Could it then generate multiple test cases based on that one prompt, if I write the prompt correctly, or do I still need to make one prompt for every single test? Does that make sense?

[00:12:37] Guy Arieli So yeah, we have two different tools, two different engines. One takes requirements and, based on the requirements, the existing test scenarios, some documentation of the product, and other guidance that you can upload, it will generate scenarios and feature files. That's on one end. On the other end, we have an engine that knows how to take instructions like "log in" and "add an opportunity," create a repository, and translate them into actions like a human being would perform on the user interface. Combining them together, you can take a feature definition and generate a working test, with the code, at the end.

[00:13:32] Tal Barmeir At the end of the day, we're basically feeding test scenarios described in business logic into this generative AI machine, and at the other end we're getting test automation code written to the very highest standard, as well as the ability to maintain that code autonomously, without human intervention. So that's basically the overall capability you can get today from generative AI, when there are really skilled testers creating those test scenarios to start with.

[00:14:05] Joe Colantonio Nice. So underneath the covers then what is happening? Do people have to write the code to implement it, or is it already implemented automatically by you like a black box? Or what's the technology behind the Cucumber that's actually driving the automation of the interface?

[00:14:20] Guy Arieli So what will happen, once you ask the engine to generate a step definition for a scenario, is that the browser will open on the URL that you have provided, and we will ask our engine to perform the first step in that scenario. The engine can say, okay, I don't know, I'm missing some data, I don't have the username or I don't have the password, I can see a login screen, but I don't have the data. That could be one response. Or it can say, okay, I want to fill this text field with this text. Then it will happen automatically, so on the screen you will see that the user field is populated with the information you have provided. And at some point, the engine will say, okay, I'm done with the login, I'm logged in, let's go to the next step. During that process, it generates the code, so the next time you want to execute the login step, it will run without any AI. It will be standard JavaScript Playwright or Selenium code that performs those actions, with multiple locators for each element. It generates state-of-the-art code that would take any human being a lot of time to write.

[00:15:52] Tal Barmeir Just to give you an idea, when we actually throw a test scenario into our generative AI Blinq.io machine, it will crunch out, within 7 minutes, code that would take a human a day or two to create firsthand. And that code is state-of-the-art test automation code. So think about how productivity is increased. Every test engineer suddenly becomes a very significant part of this machine, because they are the one enabling it to generate this high-end code.
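As a rough illustration of the kind of output Guy describes, here is a minimal sketch of a Cucumber step definition in TypeScript using Playwright, with several candidate locators per element. This is not Blinq.io's actual generated code; the selectors, the firstMatch helper, and the assumption that the Cucumber World exposes a Playwright page are all hypothetical, and a real project would need @cucumber/cucumber and playwright installed.

```typescript
import { When } from '@cucumber/cucumber';
import { Page, Locator } from 'playwright';

// Try several candidate selectors in order and return the first that matches.
async function firstMatch(page: Page, selectors: string[]): Promise<Locator> {
  for (const selector of selectors) {
    const locator = page.locator(selector);
    if (await locator.count() > 0) return locator.first();
  }
  throw new Error(`No locator matched any of: ${selectors.join(', ')}`);
}

When('I log in as {string} with password {string}', async function (user: string, password: string) {
  const page: Page = this.page; // assumes the Cucumber World holds a Playwright page

  const userField = await firstMatch(page, ['#username', 'input[name="username"]', '[data-test="login-user"]']);
  await userField.fill(user);

  const passwordField = await firstMatch(page, ['#password', 'input[type="password"]']);
  await passwordField.fill(password);

  const loginButton = await firstMatch(page, ['button[type="submit"]', 'text=Log in']);
  await loginButton.click();
});
```

Once a step definition like this exists, reruns of the login step execute plain Playwright code with no AI in the loop, which is the behavior Guy describes.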

[00:16:30] Joe Colantonio Great. Going back to our example, Tal, with the login: it uses the AI the first time to create it, and then it has a function that it goes back to. What happens if that changes? Do you have to maintain it, or is the AI smart enough to use self-healing and say, all right, the element changed, let's use something else to locate it?

[00:16:47] Guy Arieli What will happen is that it will be part of your pipeline, and the test will fail. Then a recovery process will start. It will identify that we have, say, 10 tests that failed due to the login, and it will try to recreate the login step, to regenerate the step definition for the step that failed. Once it's done, it will rerun those tests and generate a pull request of the changed code into your Git repository. So you will come in in the morning, you will see that pull request, you will approve it, and that's all.
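The recovery flow Guy outlines could be pictured roughly as follows. This is a conceptual sketch only, not Blinq.io's implementation: regenerateStepDefinition stands in for the AI engine, the Git commands are the standard CLI, and the branch name and commit message are made up for illustration.

```typescript
import { execSync } from 'node:child_process';

interface FailedStep {
  stepText: string;        // e.g. "I log in as a sales representative"
  definitionFile: string;  // path of the step definition that broke
}

// Hypothetical stand-in for the AI engine that re-drives the UI and rewrites the code.
async function regenerateStepDefinition(step: FailedStep): Promise<void> {
  console.log(`(AI engine would regenerate) "${step.stepText}" in ${step.definitionFile}`);
}

async function recover(failedSteps: FailedStep[]): Promise<void> {
  const branch = `fix/regenerated-steps-${Date.now()}`;
  execSync(`git checkout -b ${branch}`);

  for (const step of failedSteps) {
    await regenerateStepDefinition(step);      // rewrite the broken step definition
    execSync(`git add ${step.definitionFile}`);
  }

  execSync('npx cucumber-js');                 // rerun the tests that depended on the step
  execSync('git commit -m "Regenerate step definitions for failed steps"');
  execSync(`git push -u origin ${branch}`);
  // The pull request itself could then be opened, for example with the GitHub CLI:
  // execSync('gh pr create --fill');
}
```

The key point is the shape of the loop: failures trigger regeneration, the affected tests are rerun, and the change arrives as a pull request that a human reviews in the morning.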

[00:17:28] Joe Colantonio And that's great for accountability. You have a log then. So if, for some reason, someone asks what happened, well, here it is, automatically created for you. Which is better, I guess, because then you automatically have reports generated for you with the exact steps that occurred, especially if you're being audited. Like, here it is; you don't have to generate anything on your own. So that's great.

[00:17:46] Guy Arieli Yeah. So we tried to design the system so the human being is an auditor at the different points where you would want an auditor. Sometimes AI can make mistakes, and one thing you don't want is for those mistakes to enter unidentified, and then you are thinking that you are testing something, but in reality it has not been tested. We have put an auditing point at every point where the AI generates something. So we provide a visual report with a screenshot of every element that was identified and the reasoning behind every decision. You can review it, approve it, and then it will become part of your main branch.

[00:18:37] Tal Barmeir And I think, if you just think about it, when you run tests and, out of whatever, 50,000 executions, you simply have 5,000 that failed, you now need to go in and understand why they failed. And a lot of them are things that are insignificant, but it holds back all the releases. Now you can actually have that done, miraculously, in no time. And those tests that failed can be rectified so that they can be rerun successfully. So not only did you speed up the release of the software, you also relieved testers of some of that very mundane work of just going through each one of those failed executions, trying to understand the insignificant reason they actually failed, and taking them out of that bucket. I think in terms of the quality of life of testers, it really puts them in the driver's seat, doing the more important analysis and strategic testing work rather than the repetitive stuff, which is typically also always last minute and under high pressure.

[00:19:53] Guy Arieli And I think one important point: the recovery is not just about the locator of an element, it's the entire heuristic of how to perform the step. Sometimes the menu will move to a different place, or there are additional steps you need to perform on the way, and it will know how to overcome those changes as well. As long as the business logic stays the same, the system knows how to maintain the test.

[00:20:25] Joe Colantonio Tal, you mentioned maintenance. It's not trivial. I mean, even a 2 to 3% failure rate, for a tester who has thousands of automated tests, takes forever to root-cause. And by that time you have another code check-in that kicks off another test suite, and then it's too late. I mean, that's where the power of AI really helps. But I'm also wondering what else can be done. You keep mentioning this army of AI to help testers. What other things am I missing that could assist testers, that you've been seeing or that you're envisioning?

[00:20:53] Tal Barmeir First of all, the AI can help the tester test in any language. It supports testing of global websites in German, French, Korean, English, Finnish, you name it, any language. So suddenly any tester is multilingual. It empowers testers to be multilingual testers, and that is a huge thing, definitely in a global environment. Another thing is that it enables you, with the same business logic, to test regardless of the platform. So you can test on an iOS device, on a website, on an Android device, on a desktop application. As long as the business logic of the flow is the same, it will be fully sustained and executed by the AI virtual tester for you. Now somebody that only knew how to create test automation scripts, say, for a web environment can suddenly do it for iOS or for Android or for a tablet or for a desktop environment. This really is a force multiplier for any tester, who can suddenly become multi-platform and multilingual, and all of that overnight. So I think it's super, super powerful.

[00:22:14] Joe Colantonio Once again, having been a software engineer for many, many years, localization testing was a pain. We'd have to get Excel sheets for our UI, send them out to all these experts, get approval, come back, and then compare it against what the developers did. That's an awesome use. So how does this work with devices, and am I wrong that this is less code and less maintenance on top of it? It sounds like you have one script that can then handle all the devices, all the different scenarios, rather than, oh, I need a special one for Android or a different one for iOS, and you have this code base now that's just sprawling all over the place. Is it legitimately just having one test case and then passing in, say, "run against Android"? How does that work?

[00:22:57] Guy Arieli We have different step definitions. Let's say you have this login example. Then you will have different step definitions for Android, for web, and for iOS. And sometimes there will be different step definitions for different breakpoints in screen sizes. So you can say, okay, I want to log in, but the login is different, the menu collapses differently. You just say "I want to log in" and it will generate different step definitions depending on the platform, the screen size, and so on.

[00:23:36] Tal Barmeir Yeah. I think, Joe, the best way to describe this is: think about the AI as the brains of the tester, and you've provided the eyes and you've provided the hands, and it knows its way around. It doesn't care if it's, so to speak, looking at an iPhone or a Samsung Android phone or a website. It just follows the logic that you're giving it with its general intelligence, exactly like a human person would. It doesn't really care what the platform is. It doesn't care which language, because it speaks all languages. It just follows the business logic, regardless of the platform.
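To picture how one business-level step can resolve to different step definitions per platform or breakpoint, here is a small sketch in TypeScript. It only covers desktop web versus a collapsed mobile-web layout with Playwright; native Android and iOS would use a different driver in practice, and the selectors and the TARGET environment variable are illustrative assumptions, not Blinq.io's conventions.

```typescript
import { When } from '@cucumber/cucumber';
import { Page } from 'playwright';

type Target = 'desktop-web' | 'mobile-web';
const target = (process.env.TARGET ?? 'desktop-web') as Target;

// Same business-level intent ("I want to log in"), different mechanics per layout.
const loginFlows: Record<Target, (page: Page) => Promise<void>> = {
  'desktop-web': async (page) => {
    await page.click('#nav-login');                      // login link in the top navigation
    await page.fill('#username', process.env.TEST_USER ?? '');
    await page.fill('#password', process.env.TEST_PASS ?? '');
    await page.click('button[type="submit"]');
  },
  'mobile-web': async (page) => {
    await page.click('#hamburger');                      // menu collapses at small breakpoints
    await page.click('text=Sign in');
    await page.fill('#username', process.env.TEST_USER ?? '');
    await page.fill('#password', process.env.TEST_PASS ?? '');
    await page.click('button[type="submit"]');
  },
};

When('I log in', async function () {
  const page: Page = this.page;                          // assumes a Playwright page on the World
  await loginFlows[target](page);
});
```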

[00:24:16] Joe Colantonio Awesome. Does it help at all with test data? That's another concern. People are like, yeah, I have this AI, I can do all these scenarios, but now I have to come up with even more test data, which is so difficult to begin with.

[00:24:28] Guy Arieli One of the problems we had in the early stages of building that engine was that I could tell it, please log in, and let's say I didn't provide any username and password; it would try to invent different usernames and passwords, failing over and over. We worked very hard to tune our engine so it will not generate data like that. It is tuned to say, "I don't know, I need assistance," if the data is not provided to it.
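A tiny sketch of that "ask, don't invent" behavior: if required test data has not been supplied, fail with an explicit request instead of guessing credentials. The environment variable names here are illustrative assumptions, not part of any real product API.

```typescript
// Fetch a required piece of test data, or stop with a clear request for it.
function requiredTestData(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing test data: please provide ${name} (for example via an environment variable or a data file).`);
  }
  return value;
}

const username = requiredTestData('TEST_USER');
const password = requiredTestData('TEST_PASS');
console.log(`Running the login scenario as ${username} (password supplied: ${password.length > 0})`);
```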

[00:25:06] Joe Colantonio We've talked mostly about functional automation. Do you see any application of this for other types of automation that may be done in the software lifecycle but aren't necessarily a test? Because it's AI, I assume you can run any type of task that is software-related?

[00:25:22] Guy Arieli First of all, API testing is next on our list. It makes a lot of sense to use the API documentation and, based on that, generate a specific API test or a combination of API and UI tests. This is something we are already experimenting with, and we are getting very nice results.

[00:25:51] Tal Barmeir At the end of the day, we think that generative AI can really help with the last mile of testing: functional, API, performance, and load. Because if you think about it, the more code is generated by Copilot and other tools, the less it is really seen by a human eye before it's shoved over to the end user, and the only gate in the middle is that functional testing, or last-mile testing. And that's where we see most of the value: the fact that you can actually imitate the way a person would experience using the software product that was ultimately created, and test it the way humans test a product. We are very much focused on that last mile, but it also includes API, performance, and load.

[00:26:41] Guy Arieli One more point is that, in many cases, when the AI is failing, it is due to bad user experience. We can get the model's confidence about a decision and know whether this UI element is right from a user experience perspective. So if it is an easy decision for the AI, it's usually because there is a good user experience in this flow.

[00:27:17] Tal Barmeir Think about it: at the end of the day, we can measure how much effort the AI spent to execute the business logic it was asked to, and based on that, understand whether it is an intuitive user experience or something that would also take a person a lot of effort to execute.

[00:27:41] Guy Arieli One more point around that. Sorry, Tal. When we execute the engine, it runs in two modes. One is a user that is seeing the product for the first time, so it doesn't have any prior knowledge of this specific product except the generic text available on the internet. The other option is to upload your entire documentation and guidance into our engine, and now you have a very, very experienced test engineer that knows everything about the product and how to do everything. So you can decide whether to use someone with fresh eyes working on the user interface, or someone very experienced who has come in and read all your documentation, and so on.

[00:28:37] Tal Barmeir So think about it. If you have a professional type of application, where you actually need to know all the ins and outs in order to get anywhere and test it, then you can actually educate your virtual tester to know that. And if not, if it's just something that should be easy, a B2C application, you can let it be sort of an ignorant tester. You can play around with configuring its level of expertise in the application being tested.

[00:29:07] Joe Colantonio I have all kinds of scenarios playing out in my head now. I was just thinking, once again, I used to work for a health care company, and we created our software for radiologists. They'd come up with all this fancy UI and UX and work on it for months and months. Beautiful. And then you get it in front of the radiologists and they're like, it's unusable. It's almost like you could use this as a confidence score ahead of time, shifted left, to say, wait a minute, I want to redesign this because it seems like it's going to be more complex than what you were thinking. Great use case, by the way. I know you two always take it two steps ahead of most people, obviously, because you're visionaries; you're always creating new stuff. I know multimodal AI is coming out, and I think that's going to be even crazier with AI. Have you heard anything about, you know, Gemini is one example, where you can then leverage visual things and audio? So not only the generative aspect of AI with language, but now you're incorporating all these other layers. I think it's going to get nuts, but I don't know. Any thoughts on the future of AI or on your roadmap in the next 1 to 2 years?

[00:30:05] Guy Arieli So obviously, combining images with the text in the input of the engine has the potential to boost the capability of any engine. And it can be done for two reasons. One is for the engine to better understand what is happening on the user interface, and the second is to let the engine find problems on the visual layer of the application.

[00:30:39] Tal Barmeir I think that, at the end of the day, the sky's the limit with generative AI going into software development and testing and all this DevOps world; there are many other directions. For example, you could take generative AI to help you maintain and test your documentation. It can generate test scenarios. So there are all sorts of directions you could take. It's its creativity as well as its discipline, this combination of being able both to be creative and also to do whatever it's told, which is something that's quite difficult to find in actual human beings. And that's super powerful. We believe that's going to have a lot of impact on the software development cycle.

[00:31:26] Joe Colantonio Absolutely. So I'm just looking through my notes; I don't know if I mentioned Blinq.io. What is Blinq.io? We were talking about all this functionality, all these benefits of AI, but you've actually created a new tool that does all this. So maybe a quick plug for what Blinq.io is.

[00:31:44] Tal Barmeir Blinq.io basically creates virtual testers that are able to receive a description, a test scenario in Cucumber, and translate that into test automation code, which is state of the art, as well as maintain it. It's a SaaS product, available to buy on our website. And it's highly productive in terms of being injected into any testing organization, enabling it to release software at very high speed. It cuts the testing cycle to almost zero time. It enables higher productivity for the engineering team, because there is no context switch between the time somebody programs and the time they receive the feedback. It's multilingual, it supports all platforms, and it's really cool.

[00:32:38] Joe Colantonio Awesome. Okay, Guy and Tal, before we go, is there one piece of actionable advice you can give to someone to help them with their AI automation testing efforts? And what's the best way to find you, contact you, or learn more about Blinq.io? Let's start with you, Guy, this time, and we'll end with Tal.

[00:32:53] Guy Arieli The only way to get better at this is to work extensively with it, because it's something where there are no specific rules; well, there are, but you need to practice them in order to really understand the capabilities, what will work, what will not work, and how to approach different problems. This is my main recommendation: get your hands dirty with it.

[00:33:22] Tal Barmeir Generative AI, in a way, is very similar to the emergence of the digital world 20, 30 years back. The ability to be digitally capable then is similar to the requirement, or the need, today to heavily interact with AI and become AI-capable. And that comes very much through experience: understanding how it works, how it reacts, and then being able to work with it.

[00:33:53] Joe Colantonio Love it. And so, for people who want to try Blinq.io for themselves, do you have a free trial or something that people can get their hands dirty with to see how it actually does what you say it does? And like I said, you've both been industry experts; you've been creating solutions for years and years, so I know it does what it does. But if someone wants to try it, how do they go about doing it?

[00:34:13] Tal Barmeir Yeah. Just come over to Blinq.io. There is a "try now" button; hit it, do a quick online registration, and you're ready to go. There is detailed documentation, and every Tuesday we have an online webinar which you can join for free to see demos as well as get sort of hands-on with the product.

[00:34:35] Joe Colantonio Awesome. We'll have links to all of this in the comments down below for this video or audio podcast, wherever you're listening. All right. Awesome. Thank you, Guy and Tal, you're awesome. I really appreciate you.

[00:34:46] Guy Arieli Thank you.

[00:34:47] Tal Barmeir Thank you, Joe.

[00:34:48] Thanks again for your automation awesomeness. Links to everything of value we covered in this episode can be found at testguild.com/a485. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:35:24] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider, and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
