AI For Playwright Tests with Todd McNeal

By Test Guild

About This Episode:

In this episode, host Joe Colantonio sits down with Todd McNeal, co-founder of Reflect, to delve into supercharging Playwright tests with AI and other automation innovations. Todd introduces ZeroStep, an AI library for Playwright that aims to make test automation easy to create and maintain. He also discusses Reflect's AI-enabled features, the impact of AI on testing, and the importance of AI as a productivity tool for testers. Tune in for valuable industry insights and advice on integrating AI into automation testing.

Try it for yourself now: https://links.testguild.com/reflectai

Exclusive Sponsor

Discover TestGuild – a vibrant community of over 34,000 of the world's most innovative and dedicated Automation testers. This dynamic collective is at the forefront of the industry, curating and sharing the most effective tools, cutting-edge software, profound knowledge, and unparalleled services specifically for test automation.

We believe in collaboration and value the power of collective knowledge. If you're as passionate about automation testing as we are and have a solution, tool, or service that can enhance the skills of our members or address a critical problem, we want to hear from you.

Take the first step towards transforming your and our community's future. Check out our done-for-you awareness and lead-generation packages, and let's explore the awesome possibilities together.

About Todd McNeal


Todd is the co-founder of Reflect, a test automation tool that can execute manual test cases using AI. Todd and the Reflect team have also recently launched a new AI library for Playwright called ZeroStep. Todd is passionate about making test automation easy to create and maintain.

Connect with Todd McNeal

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:25] Hey, it's Joe, and welcome to another episode of The Test Guild Automation Podcast. Today, we will be talking with Todd McNeal all about supercharging your Playwright tests with AI and other AI innovations in automation. You don't want to miss this. If you missed the last show we did with Todd, we got a lot of great insight from it. Todd is the co-founder of Reflect, which is a test automation tool that can execute manual test cases using AI. We actually did a webinar on this as well that I'll have a link for in the show notes, and you definitely need to check it out after you listen to this. Todd and the Reflect team have also recently launched a really exciting new AI library for Playwright called ZeroStep that I've covered on my news show, and I was really excited to have Todd join us to share all about this and the other innovations he's been working on over the past year. Todd is really passionate about making test automation easy to create and maintain, and I am as well. I think you definitely want to stick around all the way to the end to see how you can really supercharge your tests with AI. You don't want to miss it. Check it out.

[00:01:28] This episode of the Test Guild Automation Podcast is sponsored by the Test Guild. Test Guild offers amazing partnership plans that cater to your brand awareness, lead generation, and thought leadership goals to get your products and services in front of your ideal target audience. Our satisfied clients rave about the results they've seen from partnering with us from boosted event attendance to impressive ROI. Visit our website and let's talk about how Test Guild could take your brand to the next level. Head on over to TestGuild.info and let's talk.

[00:02:04] Joe Colantonio Hey, Todd. Welcome back to The Guild.

[00:02:08] Todd McNeal Thanks, Joe. Thanks for having me.

[00:02:10] Joe Colantonio It's been a crazy past year for you, I think. Your webinar on generative AI this year was, I think, the best-attended and most-registered-for webinar for the Test Guild. Really excited to have you join us. I think I know why, but why do you think GenAI has been such a hot topic, and why did your webinar in particular seem to resonate with so many people?

[00:02:31] Todd McNeal Well, I'm excited to hear that. I think it's just so new and so different from other technologies that people are very curious about where it's going to go. It's evolving so quickly.

[00:02:43] Joe Colantonio Absolutely. So speaking about evolving so quickly, I know you were on the show in July, I think, but it looks like you've already advanced way past what you'd done then. So maybe, for the people who missed the last episode, talk a little bit about what Reflect is, and then we will jump into some AI approaches to testing.

[00:03:01] Todd McNeal Yeah. Reflect is a web-based tool for building automated end-to-end tests, and we launched about 3 or 4 years ago. Historically, we've really focused on no-code and low-code automation. But once we saw ChatGPT launch, which was just about a year ago, we realized that this new AI technology would change a lot about how we build software and test software. So for the past year we've really been focused on AI-enabled features. When I spoke to you back in July, we had just launched our AI feature set, which allows you to basically write a prompt within Reflect and have that be translated into an action or assertion in your test. The way you can imagine that is if you wanted to click on the first entry in a table, you could just say "click on the first entry in the table" and it would do that, and you don't need to conform to any sort of syntax. Since we launched that, about two-thirds of our customers are now using that feature, and we use the same AI for self-healing as well, so even more of them are taking advantage of it. So we've decided to invest a lot of our engineering effort there, and we've been branching out with new functionality that I'm excited to talk more about today.

[00:04:25] Joe Colantonio I love when you get something in the hands of your customers. I'm curious to know if you found anything new, like, "Oh, I didn't know they were going to use it like that," or whether it worked the way you thought it would. Maybe it works differently, but better. Did you get any feedback around that?

[00:04:37] Todd McNeal We did, yes. When we first launched it, we launched with the ability for a prompt to translate into a single action. So you would say "click on the first search result" or "put your username in the username field." What we realized is that customers want to do more with prompts, and basically what they want to do is what would appear in a manual test plan. In a manual test plan, you don't just have steps that say click on this or insert this. You have assertions, you have test data that's inserted there, and you have things that are multiple steps, like filling out a form: "fill out the form with values that a user would enter; it doesn't really matter to me what they are, but it should look realistic." What we ended up doing is adding features iteratively as customers asked for them. So now the prompts can do things like fill out an entire form. We have examples around scheduling something in Calendly and doing things in very hard-to-automate ecosystems like Salesforce and SAP. Over the past six months, we've really been iterating to handle all those use cases.

[00:05:48] Joe Colantonio I love it. The reason I opened up by letting people know about Reflect is that there are a lot of tools out there and everyone does things differently. So I'm curious to know, for the people who haven't tried it yet, what's unique about your approach to AI and testing? What is your company focusing on that maybe others aren't, or what's your special sauce, maybe?

[00:06:09] Todd McNeal Yeah. I think what customers like a lot about Reflect is that it's easy to use. Especially if you're a tester who hasn't done much automation, or if you've tried other tools like this and found them to be a little unwieldy, a little too much to learn, not as user-friendly as you would expect, users are usually happy with the user experience of Reflect. The other thing that we've really focused on is accuracy. When you think about tests, running tests but also maintaining the tests, a lot of times what happens is you spend not only a lot of time creating them but also maintaining them. Things are breaking all the time. Our philosophy is that with the best testing tools, whether you're writing code or not, tests only break when there's a bug or when your requirements have changed. Other than that, ideally they shouldn't break. That's the ideal we've been chasing for the past couple of years, and we're getting closer and closer to it. But yeah, the accuracy and the ease of use are usually the things people like the most.

[00:07:14] Joe Colantonio Another thing I like about your tool is that a lot of times when you go with a tool, it forces you into "this is how you need to create something," and that's a workflow that may be completely different from what your team is already used to. So I'm curious to know how you feel your AI solution needs to plug into existing users' workflows.

[00:07:34] Todd McNeal Yeah.

[00:07:35] Joe Colantonio I don't know if that makes any sense. Any thoughts on that?

[00:07:38] Todd McNeal Yeah. If you're learning a new tool, obviously any change to your toolset when you're testing can have a big impact on your existing workflow. That means not only what you have to learn, but also whether it works with your existing tools. Does it work with your test case management tool? Does it work with your bug reporting? How do I get reports out of it to report on defect rate or things of that nature? We think it's really important that we fit into your workflow and don't require you to change those other things that connect into how test automation is done. One of the things that we released recently with Reflect is basically a Chrome extension that allows you to run Reflect tests within your test case management tool. Going back to this concept of writing prompts for test automation, those prompts actually end up looking like a manual test plan. With our Chrome extension, you can take your existing manual tests in your test case management tool and run them straight away as automation. That's an example of really fitting into your existing workflow, because if you're a functional tester, you're probably living in your test case management tool. So that's where we fit in with you.

[00:08:55] Joe Colantonio All right. Because most people are just listening, or at most watching the talking heads on YouTube, can you explain a little how that works? How does it create an automated test out of a manual test?

[00:09:05] Todd McNeal Yeah. The way it works is, in your test case management tool you may have hundreds or thousands or more test cases, and a subset of those test cases are things that you're running every time you have a new release that you're testing. That might be every week or every two weeks; if you're testing in Salesforce, you're probably testing the service pack, which is three times a year. What this approach allows you to do is go into your test case tool, and whether you're running a single test case or a set of test cases, our extension adds a button, basically just a run button. You click on that button and it takes your test case and treats each test step as a prompt. The first step is "insert your username X," the second step is "insert your password Y," and the third step is "click on the log in button." We'll execute those as individual steps, just as if you had written them as prompts inside Reflect. What's cool about that is the data is synced both ways. When you go into Reflect, if you start running that test and you say, "Oh, actually my test case isn't detailed enough to run as automation because it says 'log in' instead of 'enter your username, enter your password,'" it's very easy to change that in Reflect, and when you save it, it gets automatically updated in your test case management tool.

[00:10:29] Joe Colantonio Nice. This sounds really cool. I just think sometimes people get scared, and we may have covered this in the last show, that if they go with a tool and then start integrating it with their test case management tool, and for some reason management says, "Oh, we can't afford the tool anymore," something happens and they can't run the tests anymore. It's called vendor lock-in. I'm not implying that's what vendors do on purpose. But what would happen in that case if someone went with something like Reflect?

[00:10:52] Todd McNeal Well, that's true. I mean, with any tool that you're using there is some level of lock-in. It's very rare in my experience for someone to use an automation tool, whether it's code-based or not, and then be able to switch over to something else and basically just export it and switch, even if you're using something like Selenium or Playwright. Mostly, when people move to a different automation framework, they're going to rewrite it, because there are different best practices and different patterns, and it's usually a good opportunity to clean up the things that have accrued over time. For us, though, I think the vendor lock-in is less than with other tools because at the end of the day, it's really just your test cases. When you're running tests from your test case management tool in Reflect, the edits that you're doing to make everything work correctly are really just clarifying the tests, making them less ambiguous, making them more specific in certain cases. And that's just improving your test cases. You can think of it this way: the AI works best if you're writing the test cases so that if a new team member came in and read that test case without much context about your application, they'd be able to perform it and know if it passed or failed. Because there's not a lot of coding involved, there aren't a lot of Reflect-specific things you're doing outside of regular actions that would get sent back to the test case management tool. It does reduce lock-in a bit.

[00:12:24] Joe Colantonio I know another objection I'm going to hear, and I do hear a lot, especially around low code: why would anyone want to run a functional test case as an automated test rather than roll their own and write their own automation script? I know a lot of people are sometimes skeptical of black boxes.

[00:12:39] Todd McNeal Yeah.

[00:12:40] Joe Colantonio And with AI, I guess they don't know what's going on; they can't tweak the code. Any thoughts on that?

[00:12:45] Todd McNeal Yeah, that's been a criticism of low code and no code for a long time. I mean, low code and no code even predate Selenium, as you know. A lot of people have faced issues where there's a limitation with a tool: it doesn't let me get my job done, and it would be easier if I could just code it. I think there are a lot of legitimate concerns there. With AI, what's really exciting about it is its next-level accuracy. The self-healing from five years ago, it improves upon that a lot. It improves upon some of the frameworks that have been around a while, like Gherkin, where you really have to be specific in your syntax and underneath the covers it's just code anyway. With AI, if the AI is accurate enough, it kind of solves all these problems that people have been trying to build solutions for, for years. And that's why it's so exciting. But yeah, I think people need to vet for themselves: is the AI accurate enough? We think it is, and our customers are having a lot of success with it, but it's something teams would need to decide for themselves.

[00:13:50] Joe Colantonio I don't mean to keep coming back to this, but you did do a webinar, and I know in the comments people were saying, "This looks unbelievable." I heard afterward that some people actually tried it; I think there's a free trial people can check out, and they said it worked just like it did in the webinar. So have you heard that from customers? If someone wants to go with an AI solution, how much overhead is it to get it actually working the way a vendor claims it works?

[00:14:14] Todd McNeal It depends. I mean, a lot of it is similar to how you would think about moving to any automation framework. What's going to make that project more successful is whether you've spent some time thinking about how you manage the application's data, so that application state is controlled and isn't completely different every time you run the app. If your app's UI is changing a lot, obviously that also makes the test automation harder, and that's an input into how much automation you want to do; maybe smoke testing is the right call to start with. I think, though, we have seen a lot of success with people coming in, vetting it, and seeing it actually does work. In all the demos we do with customers, and also with the ZeroStep library that I think we'll talk about later, we try to give practical, real examples. It's not against a test automation site that's kind of simplified. It's real things, like click on the first result in Google, or sign up for something, or schedule something in Calendly. Things of that nature.

[00:15:25] Joe Colantonio I would think as a vendor it gets frustrating as well. How do you fight off unrealistic expectations? I know as a tester, a manager might hear "AI tool" and think, "I can get rid of you all. We've got this tool. We're all set." Even as a vendor, you might think, "Oh, wait a minute, that's not what we're saying." So how do you fight unrealistic expectations when people hear AI in testing and automation and think, "Oh, I don't need anyone anymore; we'll just run this tool"?

[00:15:48] Todd McNeal Yeah, I think there's a perception that we would hear that a lot from people vetting us, like, "Hey, I could get rid of testers." I don't hear that. What I do hear is a sense of fear around what's going to happen: is AI going to take my job? I don't think that's the case. I think what's going to happen is that this is a tool that is going to make everybody more productive. And if you think about testing, I was a developer before, and then a development manager, before we started Reflect. Testing, I feel, is unique in that the job is never done. No one ever says, "I don't have enough to test." There's always something more to test. So a tool that's a lever on a tester's time is really powerful. Part of the fear for testers, too, and I think this is also legitimate, is: am I respected enough in the organization? Do developers listen to me? Do product managers understand when I say that these requirements are not met? I think this being a lever on people's time will help them demonstrate even more how important they are to their organization. That's my perspective on it. But obviously, I'm biased because I'm a vendor.

[00:17:15] Joe Colantonio Right.

[00:17:16] Todd McNeal But yeah.

[00:17:17] Joe Colantonio No, but I like that. Like you said, we're never done with testing anyway, so no matter how many tools we have, we still are going to always do testing.

[00:17:24] Todd McNeal It's an infinite inbox.

[00:17:26] Joe Colantonio Yeah, exactly. That's so true. As I mentioned, I have a news show and I covered AI for Playwright, and it really got a lot of people talking and a lot of people excited. I think it's open source, so it's free: ZeroStep. So why ZeroStep? You have a tool, Reflect, and now ZeroStep. What's the difference, and what's ZeroStep all about?

[00:17:51] Todd McNeal Yeah. With Reflect, we're really focused on testers, mainly functional testers, but also automation engineers who are comfortable using low-code and no-code tools. Going back to this concept of fitting into your current workflow, that's who we're targeting with Reflect. But folks who are developers have a very code-based workflow. What we wanted to do is release a tool that allows them to get the benefits of the AI that we've built but fits into that workflow. We made it a separate tool called ZeroStep because we don't think there's really much overlap between the user base of Reflect and this new tool, and we didn't want those users to form a bad opinion of it because they see record-and-playback and say, "I'd never use that." So that's why we needed something different. It's using the same AI technology behind Reflect; it's just a library within Playwright. So you get the advantage of Playwright's runner, you get the advantage of including Playwright commands interspersed among the AI calls, and you get the benefit of source control and versioning and all that good stuff.

[00:18:58] Joe Colantonio All right. Once again, because this is mostly audio, can you explain a little bit more? Is this something where someone's in their IDE and they import the library? Does it do IntelliSense for you? Does it write the code for you? Do you just write, "I want to log into my application," and it knows what that means in context and writes the code to do that?

[00:19:15] Todd McNeal This is a little different from something like that, which would be more like GitHub Copilot. The concept here is that you actually have the AI prompts inside your source code. If you think about a Playwright test, you might have many tests, but a test within Playwright generally loads up a page, you probably log in, and then you perform some set of actions. Those would be clicks or inputs or assertions, things of that nature. Those calls can be replaced with what we provide, which is an AI function. That AI function just takes a string in which you describe what you want it to do. So instead of a page.click, you could say, "AI, click on the log-in button," or replace a series of steps that fill out a form with, "AI, fill out the form with realistic values." On our website, zerostep.com, we have a bunch of examples of where you could use it. The reason we went with that approach versus generating code is that we think the AI is going to be most valuable in testing when it's running at runtime. Going back to this concept of creating tests and maintaining tests, the ideal in my opinion is that tests only break when there's a functional change to the app or there's a bug. We think the AI gets you much closer to that than writing things that, at the end of the day, are based on the implementation of the page: the selectors, the individual elements you need to fill out in the form, things of that nature.
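To make that concrete, here is a minimal sketch of what a Playwright test using ZeroStep's ai() helper can look like, based on the examples published at zerostep.com and the project's GitHub repo. The URL, field labels, and prompt wording are made up for illustration; check the repo for the current import path and function signature.

```typescript
import { test, expect } from '@playwright/test';
import { ai } from '@zerostep/playwright'; // ZeroStep's Playwright library

test('log in using plain-English AI steps', async ({ page }) => {
  await page.goto('https://example.com/login'); // hypothetical app URL

  // Each step is a natural-language prompt resolved by the AI at runtime,
  // instead of a selector-based call like page.click('#login-button').
  await ai('Enter "demo_user" in the username field', { page, test });
  await ai('Enter "demo_password" in the password field', { page, test });
  await ai('Click the log in button', { page, test });

  // Prompts can also cover multi-step actions, like filling out a whole form.
  await ai('Fill out the profile form with realistic values', { page, test });

  // Regular Playwright assertions can be interspersed with the AI calls.
  await expect(page).toHaveURL(/dashboard/);
});
```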

[00:20:52] Joe Colantonio It's quite different compared to, say, ChatGPT, because ChatGPT would say, here's the locator, here's the CSS selector to use. This is more condensed, and it seems like it would save a lot of code.

[00:21:04] Todd McNeal Right, yeah. I use ChatGPT a lot for coding now. You might use ChatGPT to create a Playwright test, kind of a starter test, or if you don't know something, I use it a lot for asking questions like "what's the command to fill out a form field in Playwright?" and it'll give me that. This is different; it's something that's actually in your source code. When you're running the tests, it's consulting the AI at runtime to determine, given the state of the page, which for us is the internal state of the page plus a screenshot, what actions or assertions it needs to perform to fulfill the particular prompt in that AI function.

[00:21:48] Joe Colantonio Nice. Two things: is there any overhead when you use that approach? Does it need to talk to a server with that API and then come back, or is it all local somehow?

[00:21:55] Todd McNeal It does call a server. It's just like if you're using the ChatGPT API or using ChatGPT: it's going to the ChatGPT server to answer those prompts. The ZeroStep AI is going to our server as well. The reason it does that is that the AI model needs to run on our server; the best AI models today are generally too big to fit on a laptop. And since we're using OpenAI under the covers, we need to call OpenAI to answer some of the questions. You wouldn't be able to run that locally today. Someday in the future that might be the case, but yes, it's making server-side calls.

[00:22:37] Joe Colantonio So some people might be freaked out, like, I'm telling the AI to fill out a form and generate the data for me.

[00:22:44] Todd McNeal Yep.

[00:22:44] Joe Colantonio How do you trust it? Do you have good reports or outputs that say, "Hey, look, maybe you expected this; this is what we found and this is what we did"? How do you make sure the AI is honest, that it's not hallucinating or just doing something when it's really not doing what you think it's doing?

[00:23:00] Todd McNeal So it is a black box, but we provide information about what it did. Playwright itself has a lot of built-in things to help you know what it clicked on and what it was doing. We hook into all of that, so you'll be able to see what it clicked on and what it filled out using the debug mode or the reports. We also provide error messages when we're not able to fulfill a prompt. But at the end of the day, the workflow with the AI tool is very similar to writing selectors: you're going to write them, run the test, and watch and see what it does. That's the same with AI. The thing that's different about AI is that, since it is a black box, you'll need to have confidence that the AI is going to be accurate enough to do those actions. Again, that's why we provide a lot of examples to show that this actually works in the real world, and all of our Reflect customers, the two-thirds or more that are using our AI, are running on the same AI. So it's production-ready.
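Because the AI calls run inside Playwright's own test runner, Playwright's standard tracing and reporting apply to them just like to hand-written steps. Here is a minimal playwright.config.ts sketch; the specific option values are just one reasonable choice, not anything ZeroStep requires.

```typescript
// playwright.config.ts: rely on Playwright's built-in reporting to inspect
// what each step, AI-driven or not, actually clicked, filled, and asserted.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [['html', { open: 'never' }]], // HTML report with per-step details
  use: {
    trace: 'on-first-retry',        // full trace (actions + DOM snapshots) when a test retries
    screenshot: 'only-on-failure',  // capture the final state of failing tests
    video: 'retain-on-failure',     // keep video only for failures
  },
});
```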

[00:24:01] Joe Colantonio Why Playwright? Is it just because you have to start somewhere? I know a lot of people would love to see something like this for Selenium. Do you see that down the road? What's next, I guess, for ZeroStep, or anything else, or even Reflect?

[00:24:14] Todd McNeal Yeah, we see a lot of applications for it. I think for us, since we're a small team, we wanted to start smaller with ZeroStep: get a large enough user base that we can see whether it's actually providing value, and nail that experience before we expand to other applications or other types of testing. But yeah, I could see it being useful in a lot of different areas. For Reflect, what we're really excited about is finally being able to fit into that manual testing workflow. Today, on the Reflect side, we integrate with a lot of the popular test case management tools, Zephyr, Xray, and TestRail, and we'd look to expand to others in the future. But yeah, you start somewhere and you expand as you go.

[00:25:02] Joe Colantonio How extensible is it? Say someone has some weird, wonky in-house test management tool. Do you have an API they can look into and write against, or do they have to work with you to get that up and running?

[00:25:13] Todd McNeal We do have an API. For the extension, we would need to add support for it. We've been collecting feedback from folks who are using test case management tools that we don't support right now; there are a lot of test case management tools and a lot of good options, so that's something we're looking at. But with Reflect, even if you're using a test case management tool we don't yet support, if you want to use the AI piece, you can use Reflect and use our API to sync the data over to the TCM tool.

[00:25:42] Joe Colantonio Going back to ZeroStep, like I said, a lot of people are really interested in it. Is this open source, and why would you make it open source? Is it like the Cypress model, where some of it's open source and the rest is paid?

[00:25:55] Todd McNeal Yeah, the library itself is open source. There are two components to it. The first part is the JavaScript library that integrates with Playwright, and that's another thing we may change in the future; we've had feature requests for adding Java support and Python support, so we will probably be adding support for other languages. What the library does is hook into Playwright. When you make a call with this AI function, it has a prompt in it. We send that to our back end, along with the state of your application, and our back end is what has the AI to determine what action or assertion, or set of actions, to do. That's done through a WebSocket connection, which is a persistent connection. The library is open source and we welcome any contributors. The server, just like OpenAI's, is closed source.
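As a rough, conceptual illustration of the split Todd describes (open-source client, closed-source AI back end), a single AI step boils down to: capture the page state, send it with the prompt over the persistent connection, and execute whatever actions come back through ordinary Playwright calls. This is not ZeroStep's actual implementation; the message format, action shape, and use of the ws package are invented here purely to show the flow.

```typescript
import { Page } from 'playwright';
import WebSocket from 'ws'; // stand-in for the persistent connection to the AI back end

// Conceptual sketch only, not ZeroStep internals: one prompt, one round trip.
async function aiStep(prompt: string, page: Page, ws: WebSocket): Promise<void> {
  // 1. Gather the page state the AI reasons over (DOM plus a screenshot).
  const payload = JSON.stringify({
    prompt,
    html: await page.content(),
    screenshot: (await page.screenshot()).toString('base64'),
  });
  ws.send(payload);

  // 2. The back end's AI decides which actions fulfill the prompt and replies.
  const actions: Array<{ type: 'click' | 'fill'; selector: string; value?: string }> =
    await new Promise((resolve) =>
      ws.once('message', (data) => resolve(JSON.parse(data.toString())))
    );

  // 3. Execute those actions locally through ordinary Playwright calls.
  for (const action of actions) {
    if (action.type === 'click') await page.click(action.selector);
    else if (action.type === 'fill') await page.fill(action.selector, action.value ?? '');
  }
}
```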

[00:26:48] Joe Colantonio Awesome. Last time you made a prediction; I think you said multimodal was going to be an important next step, and now OpenAI's new model, GPT-4 Turbo with Vision, has multimodal capabilities. Can you talk a little more about why you think that's important and how it's really going to help testing? Before, it wasn't there at all and you said, "I could see this happening." Now it seems like it is happening. So what can people expect from this?

[00:27:16] Todd McNeal Yeah. Back in July, there were a lot of rumors about what was going to come in the next iteration of GPT-4, which is still the current version of GPT. They now have a new version, GPT-4 Turbo, and that has this vision component. Back in July, I said we were really excited waiting for multimodal support. All multimodal means is that before, in ChatGPT and with the OpenAI GPT API, all you could send was text, basically. You could send text and you'd get text back. With multimodal, now you can send text, or images, or text and images, and get back images or text. There's DALL-E 3. DALL-E was actually released earlier, and you saw a lot of viral pictures from it, and then other models came along, like Stability AI's Stable Diffusion and Midjourney. DALL-E 3 is really good, and the same tech that's in DALL-E 3 is what's used for multimodal in GPT-4. Basically, what that allows you to do, and what you'll see more products coming out with soon, is take visuals and take action based on them. So we're incorporating that into our model to be able to handle things that normally you wouldn't be able to automate, like very visual applications or verifying images, and do that through AI, which before was really hard to do.
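For a sense of what "sending text and images" looks like in practice, here is a small sketch using OpenAI's Node SDK to ask a vision-capable model a question about a screenshot. The model name reflects the preview available around the time of this episode, and the screenshot path and question are hypothetical; this is generic OpenAI usage, not Reflect's or ZeroStep's code.

```typescript
import OpenAI from 'openai';
import { readFileSync } from 'node:fs';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Ask a vision-capable model a question about a page screenshot.
async function checkScreenshot(screenshotPath: string): Promise<string | null> {
  const imageBase64 = readFileSync(screenshotPath).toString('base64');

  const response = await openai.chat.completions.create({
    model: 'gpt-4-vision-preview', // vision-capable preview model at the time of recording
    max_tokens: 300,
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Does this page show a successfully submitted order? Answer yes or no, then explain briefly.' },
          { type: 'image_url', image_url: { url: `data:image/png;base64,${imageBase64}` } },
        ],
      },
    ],
  });

  return response.choices[0].message.content;
}
```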

[00:28:57] Joe Colantonio All right, this is crazy, because I think this is the future. A lot of the applications I used to work on for insurance companies and health care had either legacy or custom components that none of the tools could recognize, so an image-based approach seems like it would have helped with that. And now it seems even more reliable, because back in the day, if a pixel was off it would blow things up. It almost seems like you wouldn't even have to write any code with selectors per se, other than "here's an image I want, get that image, compare it to that image, and continue on," and you could take one test that runs against a web browser and maybe run it against a thick client application.

[00:29:35] Todd McNeal Yeah. Or the mobile app or something like that.

[00:29:38] Joe Colantonio Or the mobile app. Right, right, right.

[00:29:40] Todd McNeal Yeah. No, I think you're exactly right, Joe. It's really exciting because it gets us closer to the AI acting how a normal tester or end user would use an application. And I think that's one of the things that's interesting about our prompt steps, and it would apply to anything using this vision capability: to get the AI to do what you want, you're not describing it in terms of the selectors, you're describing it in terms of the end-user behavior. It really comes down to the fundamentals of functional testing, which is: how do I know how the application is supposed to behave, how do I test that, and how do I communicate that effectively?

[00:30:19] Joe Colantonio Oh my God! Because the resistance to automation testing back in the day was, "That's not how a real user does it." Now I see people resisting AI, and you're like, well, this is actually how a user does it, leveraging AI and doing it visually. It almost kills a lot of that resistance, I would think. Hopefully.

[00:30:35] Todd McNeal I would think so, yeah. And I think it might mean unlearning some things that have been popular in automation for a long time. Selectors are one; talking a couple of years from now, new automation code is likely not to be using selectors that much. But yeah, it's going back to the roots of testing, in my opinion.

[00:30:55] Joe Colantonio Love it. Okay, Todd, before we go, is there one piece of actionable advice you can give to someone to help them with their AI automation testing efforts? And what's the best way to find or contact you and learn more about Reflect?

[00:31:07] Todd McNeal Yeah. With AI, this might be what I said last time, but just give it a try. I would love to see disclaimers from folks who have never tried it but are offering their opinions on it. You can have your opinion, but I think you should really try it first, because you'll see for yourself whether it works for you and your use case. I myself have been really impressed with it. For folks who want to try Reflect, you can visit our website, Reflect.run. It's free to try, with an unlimited-use two-week trial. For automation engineers and developers who want to try ZeroStep, you can visit our site at zerostep.com. It links to our GitHub repo and also to how you can install it in your Playwright tests.

[00:31:50] Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a477. And if the show has helped you in any way, shape, or form, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:32:26] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}