AI-Assisted Testing Platforms with Todd McNeal

By Test Guild

About This Episode:

In this episode of the TestGuild Automation Podcast, host Joe Colantonio speaks with Todd McNeal, co-founder of Reflect, an AI-assisted test automation platform. Together they delve into the features of Reflect and how it uses AI to enhance onboarding and resolve selector issues in automation. The conversation covers the tool's capabilities, including adding assertions, enabling independent actions, and validating input steps. They also discuss the importance of visual validation and the integration of ChatGPT-style AI technologies for creating resilient tests. The episode underscores Reflect's unique ability to pair API calls with UI actions, simplifying automation and enabling quick, maintainable test creation.

To see a real-world example of generative AI in action, register for our webinar on July 25 on Building Automated Tests Using Generative AI. Register now => https://testguild.com/webinar-building-automated-tests-using-generative-ai/

Exclusive Sponsor

Discover TestGuild – a vibrant community of over 34,000 of the world's most innovative and dedicated automation testers. This dynamic collective is at the forefront of the industry, curating and sharing the most effective tools, cutting-edge software, profound knowledge, and unparalleled services specifically for test automation.

We believe in collaboration and value the power of collective knowledge. If you're as passionate about automation testing as we are and have a solution, tool, or service that can enhance the skills of our members or address a critical problem, we want to hear from you.

Take the first step towards transforming your future and our community's. Check out our done-for-you awareness and lead-generation demand packages, and let's explore the awesome possibilities together.

About Todd McNeal


Todd is the co-founder of Reflect, an AI-assisted test automation platform. Prior to Reflect, Todd held software development and engineering management roles at a handful of Fortune 500 companies and startups.

Connect with Todd McNeal

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:25] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. And today, we'll be talking with Todd, all about A.I. Assisted Testing Platforms, Automation, Codeless, Visual Testing, and a whole bunch of hot topics that I think you need to know about as we go forward this year and in the coming years. If you don't know, Todd is the co-founder of Reflect, which is an AI-assisted Test Automation Platform. Prior to Reflect, Todd worked in software development and engineering management roles at a handful of Fortune 500 companies and many different start-ups as well. He has a lot of experience. I'm really curious to dive into this topic and learn a little bit more about how Reflect can help with automation as well. You don't want to miss this episode. Check it out.

[00:01:08] This episode of the TestGuild Automation Podcast is sponsored by the Test Guild. Test Guild offers amazing partnership plans that cater to your brand awareness, lead generation, and thought leadership goals to get your products and services in front of your ideal target audience. Our satisfied clients rave about the results they've seen from partnering with us from boosted event attendance to impressive ROI. Visit our website and let's talk about how Test Guild could take your brand to the next level. Head on over to TestGuild.info and let's talk.

[00:01:40] Joe Colantonio Hey, Todd, Welcome to the Guild.

[00:01:46] Todd McNeal Thanks for having me, Joe.

[00:01:48] Joe Colantonio Great to have you. So the bio was a little thin, so I'm just curious to know how you got into creating a test automation platform. What led you there?

[00:01:56] Todd McNeal I had worked at a previous startup with my co-founder, and as a software developer I had done end-to-end testing in different roles. At that time, it was a lot of Selenium. And what we found when using those tools is that, as a product developer, you don't really have that much time to go and maintain the tests. We had a few cycles where we would go and try to build it, and it would work for a little while and then start failing, and then we kind of stopped running them. It wasn't that there wasn't value in the tests; it was just that they took too much time for us to maintain. So what we ended up doing at that particular startup, which had around 20 engineers, is we would just manually test on every deployment. And as a developer, that just felt wrong. You're always looking to automate things and remove manual processes. That was really the impetus for us to start the company.

[00:02:52] Joe Colantonio Nice. I know there are a lot of automation tools out there beyond Selenium. What made you think the market needed another one? What makes Reflect different, and did you create it to address issues that other tools out there weren't addressing?

[00:03:08] Todd McNeal Yeah, there are a lot of tools in the market, and I think that makes it difficult for testers and folks evaluating the tools to really tell one from the other. A lot of tools make the same claims, but I think that's because everybody faces the same issues when you're doing automation: the speed of creating the tests and then the ability to maintain them over time. A lot of engineering organizations are trying to move faster, and nowadays they're trying to do more with less. There's just less time. There are maybe fewer people on the team than there were before, and it's hard to get everything done. So what's different about us is that we really focus on those two core problems: making tests faster to create and easier to maintain. One of the things about our tool is that we're a record-and-playback tool, but when you're recording, you're doing it in a cloud browser, not in an extension. We do that so that we can control the infrastructure and handle things for you that maybe other tools couldn't. Things that would be a best practice if you were a Selenium or Playwright developer, like auto-retries and detecting network requests before you proceed to the next action, you might need to hand-code or know how to tweak. We just do that on your behalf. It's a lot about doing things on the codeless user's behalf that they would normally have to write in code.

[00:04:32] Joe Colantonio Dive in a little bit more about that. What feature would that be? When someone records, a lot of times they have to actually add a wait. Because it's running in your platform, your SaaS cloud, it knows that the application is completely loaded without the user having to know or explicitly call it out. Is that how it works?

[00:04:48] Todd McNeal That's right. There are a couple of key use cases there. The first is on page load, and a lot of platforms do this: you wait for the page to be ready before you interact with it. The next level is, let's say you do some interaction like a click. How do you know when it's ready to do the next action? In a lot of applications right now, there isn't a next-page load. It's all single-page apps and very dynamic. What we do, which I'm not sure any other tools do, is automatically detect things like animations and the network request, or set of network requests, that happen after that click. If you do a click, it sends a network request, and then, 2 seconds later, that network request is finished. We'll wait for that to be complete before continuing.
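
To make that concrete, here's a rough sketch of the waiting logic described above, written as the Playwright code you would otherwise have to maintain by hand. The URL, endpoint, and element names are hypothetical; this illustrates the technique, not Reflect's implementation.

```typescript
import { test, expect } from '@playwright/test';

test('hand-coded network-aware waiting', async ({ page }) => {
  await page.goto('https://example.com/orders'); // hypothetical app URL

  // Start listening for the XHR the click will trigger, then click.
  // Proceeding is only safe after the matching response has arrived,
  // the kind of wait described above as handled automatically.
  await Promise.all([
    page.waitForResponse((res) => res.url().includes('/api/orders') && res.ok()),
    page.getByRole('button', { name: 'Refresh' }).click(),
  ]);

  // The single-page app has now re-rendered; interact with the result.
  await expect(page.getByRole('listitem').first()).toBeVisible();
});
```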

[00:05:31] Joe Colantonio Cool. And it does it all under the covers without the user having to configure it themselves. Very nice.

[00:05:37] Todd McNeal That's right.

[00:05:38] Joe Colantonio Cool. Well, I think it's also kind of interesting that you started off as a developer, and the solution seems to be a codeless solution. I know codeless has become more and more popular. And it almost seems like an experienced tester goes, oh, I'm a developer, I don't need codeless. But you are a developer, you created a codeless solution, and you would have used this as a developer. So why codeless?

[00:06:01] Todd McNeal I think that's ultimately the best way to solve the problem. We didn't start by saying we want to build a codeless tool. We started by asking, if we could build a solution, what would be the best tool for solving this problem, which is tests breaking and it being hard to keep up with the pace of development? Really, what you need is something different from the code-based paradigm that's been around since Selenium. You can make that work if you have a team dedicated to the tests, but a lot of organizations don't have that, and maybe they don't want to. So you need something at a higher level of abstraction that will actually assist you in managing the things that otherwise you'd have to go in and do yourself.

[00:06:41] Joe Colantonio Absolutely. Besides codeless, I believe you also do visual validation, or visual testing. That's something not every tool has. So why add that feature?

[00:06:51] Todd McNeal Yeah, we added that really early on, and it's been a really key feature for us. Again, it came down to the test cases: we started with the test cases we had in our own company, and then we looked at what our customers came to us with and said, here are the test cases we need to support. A lot of times when someone's evaluating a codeless tool, or any automation tool, one of the key questions is, can it actually automate our existing test cases? You want to know that you can go from 20% coverage to 80% or more coverage and not hit some limitation of the tool. And a lot of times, that means you need visual testing, because if you look at a manual test script, it will say: click on this, validate that on the next screen there's a blue button, validate that the thing appears on the right side instead of the left side. You need that level of expressibility in order to really meet your requirements. So yeah, it was there from very early on.

[00:07:48] Joe Colantonio Nice. I know sometimes people shy away from image types of validations because they can become flaky as well, because of little pixel differences. Because it's running in your platform, does that make your approach better? I'm just guessing. I don't know.

[00:08:03] Todd McNeal Yeah. By running in our infrastructure, we have a lot of control over how we do the visual validations. One area where it can get really flaky is if you're doing visual validations across browsers, right? Because each browser renders things differently. So in our platform, we run the infrastructure and run the tests. We're running Safari, Chrome, Firefox, etc. in VMs, and we create individual screenshots for each browser. You can think of the first run of, say, Edge as capturing the baseline screenshots for Edge. That way you're not cross-comparing against something you captured in Chrome, which helps reduce the flakiness there. But yes, with visual tests, if you're not careful, if you use too many of them, it will be flaky, because you'll basically get notified of changes that you don't care about. What we advise is, when you're deciding between a regular text validation and a visual validation, think about it like this: if this changed, would you open a bug ticket? And what severity would it be, trivial or major? That should guide you as to whether you want to do visual or not.
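
Here's a minimal sketch of the same per-browser-baseline idea using Playwright's built-in screenshot assertion, which also keys baselines by browser and platform; the page URL and diff threshold are assumptions, and this is not Reflect's internal mechanism.

```typescript
import { test, expect } from '@playwright/test';

test('visual check with per-browser baselines', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical page

  // Playwright stores a separate baseline per project/platform
  // (e.g. checkout-chromium-linux.png vs. checkout-webkit-darwin.png),
  // so a Chrome run is never diffed against a Safari capture.
  await expect(page).toHaveScreenshot('checkout.png', {
    maxDiffPixelRatio: 0.01, // tolerate tiny rendering differences
  });
});
```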

[00:09:12] Joe Colantonio Gotcha. I know the testers I speak with not only use tooling, but they also use browser platforms in the cloud to run against all the different devices. Do you do the same, or does yours just happen to run the browser without including all the infrastructure to scale? How does that work?

[00:09:28] Todd McNeal It's all included in our platform; you don't need to use an external test grid at all. We handle that all for you. One of the nice things about that, too, is that the same infrastructure we spin up to record the tests, because it's in the cloud, is the same infrastructure we use to run them. You have fewer of those issues where, say, I recorded it locally in an extension and then I try to run it in CI, and in CI there are all sorts of differences: maybe it's underpowered, or the browser dimensions are different, or the browser version is different, or you're on a VPN. This approach factors all of that out, so it removes some of the friction of going from tests running locally to running in CI.

[00:10:12] Joe Colantonio Perfect. I also know a lot of times people just focus on UI automation. What are your thoughts on API testing? Does your platform handle it? Do you have any best practices for choosing API testing versus, like, a visual type of UI test?

[00:10:28] Todd McNeal We do support API testing in our platform, and one thing that's unique about it is that you can have API calls side by side with UI actions like clicks and inputs in the same test. What's been interesting is the use cases that have emerged from our customers doing that. It has allowed some of our customers to really reduce the size of a UI test. Whereas before, let's say the UI test is, I want to edit an existing record: I need to go into my test application and edit it, and that's a destructive action. Once I edit it, it's changed, and maybe I need to go back in and reset it. With an API call, I can create a new record with a single call and then go and edit that record directly. And because my test is creating new resources every time, it doesn't matter if it fails in between; it's not going to affect the next test. That's been really, really interesting. The other use case we see is that a lot of applications integrate with third parties. Whether you're an e-commerce site integrating with Stripe or shipping or tax integrations, or a SaaS platform, again with payments or other things, you don't want to have to go to a third-party application's UI to validate something. In those cases, you really just need to make an API call to validate that the order in Stripe, or the shipping rate, matches what is in the back-end system. API calls are really helpful for that.
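
As a rough illustration of that pattern, here's what pairing an API setup call with a UI flow looks like when hand-coded in Playwright; the endpoint, labels, and URLs are all hypothetical.

```typescript
import { test, expect } from '@playwright/test';

test('edit a record created via the API', async ({ page, request }) => {
  // Create fresh state with a single API call (hypothetical endpoint),
  // so the destructive UI edit never collides with other tests.
  const created = await request.post('https://example.com/api/records', {
    data: { name: 'Temp record' },
  });
  expect(created.ok()).toBeTruthy();
  const { id } = await created.json();

  // Exercise only the behavior under test in the UI.
  await page.goto(`https://example.com/records/${id}/edit`);
  await page.getByLabel('Name').fill('Edited record');
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('Edited record')).toBeVisible();
});
```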

[00:11:57] Joe Colantonio Great. Besides API testing, we talked a little bit about codeless. A lot of times testers or developers like to actually see the output and be able to change things, so how customizable is this? Are people forced to use codeless, or can they see the code that's generated in the background in case they need to customize it? How's all that done?

[00:12:17] Todd McNeal It's completely abstracted away, except that you can have code steps in your tests. One way to think about it is that it's not generating Selenium code and running that, like Selenium IDE or other tools like that. It's generating something that runs within our Reflect runner, but you can add code to it if you need to express something in code. So you could use code to extract cookies and validate them, things like that, or maybe something we don't support. You could do that in code.

[00:12:48] Joe Colantonio So would it be like a JavaScript code step or something that you would drag in?

[00:12:52] Todd McNeal Exactly, yeah. It's a JavaScript code step, and it supports synchronous or asynchronous calls. So you can even do things like network requests, or use any JavaScript API that would be asynchronous, which a lot of the new ones now are by default. Yeah.
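
Reflect's exact code-step signature isn't spelled out in this conversation, but conceptually a step like the cookie example might look something like the following sketch; the cookie name and validation endpoint are made up.

```typescript
// A sketch of the kind of logic a JavaScript code step might hold.
async function codeStep(): Promise<void> {
  // Synchronous work: pull a value out of the page's cookies.
  const sessionCookie = document.cookie
    .split('; ')
    .find((c) => c.startsWith('session='));
  if (!sessionCookie) {
    throw new Error('Expected a session cookie to be set');
  }

  // Asynchronous work is supported too, e.g. an extra network request
  // against a hypothetical endpoint of the app under test.
  const res = await fetch('/api/session/validate');
  if (!res.ok) {
    throw new Error(`Session validation failed: ${res.status}`);
  }
}
```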

[00:13:07] Joe Colantonio A lot of times people struggle with the speed of tests. Because they're running in your special infrastructure, do the tests run faster? Is that possible?

[00:13:16] Todd McNeal Generally, they do run faster, yeah, when we benchmarked against other tools. Part of that is because we control the infrastructure, so we can make investments in things like faster startup times. We can do things that would not be possible if we relied on an external grid. For example, we have faster startup times because when we load up our infrastructure, we pre-cache the browser sessions. Every browser session is isolated, and we destroy the session after it's complete, but you'll have that browser session already running, so it'll just be a second or two to jump into the session. Another cool thing we can do with the infrastructure is jump into a running test: you can actually run a test, click a button to watch it live, and interact with that test live if you want to fix something up.

[00:14:06] Joe Colantonio Very cool. So it sounds almost like a live breakpoint.

[00:14:09] Todd McNeal Yeah. And in fact, that's what we call it. You can add breakpoints, you can pause it, and move where you are in the test. So if you wanted to add steps near the beginning but you've run to the end, you can just drag the cursor up and say, now I'm at step five.

[00:14:24] Joe Colantonio Cool. I like it. So I guess another thing that's big, obviously, is AI, and it sounds like you've been around since before incorporating AI features, but I noticed you've created some AI features.

[00:14:36] Todd McNeal That's right.

[00:14:37] Joe Colantonio I thought maybe we'd dive into AI, obviously a huge topic. Some of the AI features, I think, are called AI prompt steps and the AI assistant, I believe. Can we talk a little bit about those AI features, why you created them, and what they do?

[00:14:50] Todd McNeal We hadn't incorporated any AI features up until a few months ago, just because we were focused on the reliability of the tests, the accuracy of recording, and getting the breadth of features so that customers could meet their requirements. But once we started playing around with ChatGPT, we realized this is a pretty big game changer, I think, not just for testing but for a lot of things, but certainly for testing. Our AI features integrate with OpenAI, using the same technology that ChatGPT uses. The first feature, like you mentioned, is AI prompts. You can basically think of it as an arbitrary action, or set of actions, that you tell Reflect to do. It doesn't require any sort of syntax; it's not conforming to Gherkin or any custom syntax. You just write what you want it to do, like click on this button or input something, and it does it. What's cool about it is that you can use anything the OpenAI model is trained on as part of your input. For example, in a healthcare application, maybe there are some things specific to the domain of healthcare that you need to enter. ChatGPT is trained on lots of datasets, including healthcare datasets, so you could enter things that maybe only someone with specific domain knowledge would know. In some of my demos I show that you could enter the median salary of nurses in the United States, or sample phone numbers from any country, and it knows the correct area code and length of phone number to use. There are a lot of things like that you can do.
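
The underlying idea, asking the model for domain-plausible values, can be sketched directly against the OpenAI SDK. Per the discussion, Reflect handles this for you inside a prompt step; the model name, system prompt, and helper below are illustrative assumptions.

```typescript
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical helper: ask the model for a single realistic test value.
async function generateTestValue(prompt: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini', // assumed model choice
    messages: [
      { role: 'system', content: 'Return only the value, with no explanation.' },
      { role: 'user', content: prompt },
    ],
  });
  return completion.choices[0].message.content ?? '';
}

// e.g. domain-aware data straight from the model's training:
// await generateTestValue('A plausible sample phone number for France');
```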

[00:16:25] Joe Colantonio That is cool. So you could say, like, given this patient, give me the top values for blood pressure or something, automatically?

[00:16:34] Todd McNeal Exactly.

[00:16:35] Joe Colantonio That is wild.

[00:16:36] Todd McNeal Yeah. And what that allows you to do is take a lot of the things that are already in your manual test scripts or in your requirements, the specs that you have, maybe you're doing BDD, and just put that into a prompt step. We think that eventually you could take manual test scripts and get 90% of the way to an automated test without any sort of user interaction. That's the direction we're heading right now.

[00:17:00] Joe Colantonio All right. So you say it's just a text box, and it seems pretty free-flowing. I can get pretty crazy with my ChatGPT prompts. Is it just like that, where you can type in whatever you're thinking and it will automatically, well, obviously it will try to give you what you want, but is it that free-flowing?

[00:17:17] Todd McNeal Yes, it takes arbitrary input. Basically, you can think of it as a free-form prompt that results in one or more sets of actions in the browser itself. Today, those resulting actions are things like clicks and inputs and things of that nature. What we're adding is the ability to do things like assertions. If you again think of a manual test script, you'd have assertions intermixed with the inputs, like go to login and then verify that my name is in the top right. That kind of verification is going to be something you can enter in a prompt pretty shortly in Reflect. Another thing we do is use the AI to determine whether what you entered is actually a set of steps, and then we'll split it and show it to you. One thing we've seen customers do is copy-paste a whole list of steps; it will show you the individual steps, let you edit them if you want, and then run them right after.

[00:18:14] Joe Colantonio Well, I'm still really digging that test data feature, where you can just have it create realistic data for you based on what it knows about that particular segment. So that's really cool.

[00:18:25] Todd McNeal Yeah.

[00:18:26] Joe Colantonio Is that used a lot? I mean, with new customers, is that something you've seen as one of the killer features?

[00:18:31] Todd McNeal Yeah, I would say it's still pretty early, but what we've seen from our customers so far is that it's definitely one of the features they like. Another thing is the expressibility of it. There are some scenarios that are just very hard to express if you're not coding, and I think automation folks have had legitimate criticisms of no-code/low-code tools about this. An example would be: I have a table of values and I need to click on something associated with a row, and it could be anywhere in the table. That's just very hard to do without being able to drop down to code. But with AI, it turns out you can do that just by expressing it in the English language. You could say, find the row associated with Joe and click on the edit button, and it will work whether that's the 1st row, the 7th row, or the 10th row.
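
For contrast, here is roughly what that table scenario looks like when you do drop down to code, sketched in Playwright; the page URL and button label are hypothetical.

```typescript
import { test } from '@playwright/test';

test('edit the row for Joe, wherever it appears', async ({ page }) => {
  await page.goto('https://example.com/users'); // hypothetical page

  // The hand-coded equivalent of the prompt "find the row associated
  // with Joe and click on the edit button": scope to the <tr> containing
  // "Joe", then click its Edit button, whatever position the row is in.
  await page
    .locator('tr', { hasText: 'Joe' })
    .getByRole('button', { name: 'Edit' })
    .click();
});
```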

[00:19:19] Joe Colantonio Wow! Yeah, obviously tables tend to cause issues, and also graphs. I don't know if it can help at all with graphs or anything like that.

[00:19:26] Todd McNeal Yeah. Right now it's not very good at graphs because it doesn't see the canvas. What's missing right now is that there's no off-the-shelf AI that is what's called multimodal, meaning it can process text input as well as image input. Once that technology is released, and OpenAI has said maybe early next year, and some competitors are trying to get there earlier, that will open up even more use cases, like interacting with graphs or things that are very visual. For example, a gaming application where everything is on a canvas, like a slots app or something like that, is very, very hard to automate right now. But with a multimodal AI, that becomes possible, because it will see the visuals as well as the text information.

[00:20:14] Joe Colantonio It's the first time I've heard of multimodal. I'm going to check that out, but it seems like once that goes live, it's definitely going to be another game changer. So besides AI prompt steps, which to me sound really, really cool, people need to try it out for sure. Actually, for people to see this or try it, do you have a free trial at Reflect they can use?

[00:20:34] Todd McNeal We do, yeah. Our website is Reflect.run; just type that in the browser and you can sign up for free. Our AI features are launched to everybody, so even in the free version of Reflect you'll be able to try them out, and when you sign up you get a free unlimited trial for two weeks. There are no restrictions on how many tests you can run for the first two weeks.

[00:20:54] Joe Colantonio Cool. Cool. All right. So besides trying that out with the free trial, I think, once again, you have an AI assistant. And I really like this approach; a lot of people are coming up with ways of using AI to assist, not replace. The name says it, I guess, but can you explain a little bit about what the AI assistant does with Reflect's technology?

[00:21:11] Todd McNeal Yeah. You can think of our AI assistant as next-generation self-healing. Self-healing technologies have been around for a while, and they basically work around problems with selectors, where a selector, like an XPath or CSS selector, uses the page styling or page structure to determine where to interact with something. One of the main frustrations with automation is that as the page changes, those selectors can break, and you won't know until your automation runs. What the AI assistant does is use the same technology as AI prompts to find the element when all of the selectors are invalid. In Reflect, when we record tests, we generate an English-language description of each test step, like "click on the submit button" or "input Joe in the first name field," and the AI assistant takes that and uses it as the prompt. With this approach, the better your tests are documented, and you can edit these descriptions, the more information the AI has to perform that action when the selectors are invalid.
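
As a conceptual sketch only, not Reflect's actual implementation, the fallback Todd describes, try the recorded selectors first and fall back to the English description, could be expressed like this (the AI lookup helper is a hypothetical stand-in):

```typescript
type Step = { description: string; selectors: string[] };

// Hypothetical stand-in for an AI-backed element lookup service.
async function locateViaAiPrompt(description: string): Promise<Element> {
  throw new Error(`AI lookup not wired up for: "${description}"`);
}

async function resolveElement(step: Step): Promise<Element> {
  for (const selector of step.selectors) {
    const el = document.querySelector(selector);
    if (el) return el; // a recorded selector still works
  }
  // All selectors broke: hand the human-readable step description
  // (e.g. "click on the submit button") to the AI locator instead.
  return locateViaAiPrompt(step.description);
}
```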

[00:22:13] Joe Colantonio Cool. I guess the output is very readable as well then, because it's all English-like, and you can just hand it to someone, maybe someone new on the team, and say, here's what the test is doing.

[00:22:22] Todd McNeal That's right. Yeah. We've heard from several customers that they use Reflect for onboarding new folks, even people outside of the testing organization, because you have a video, and the video is synced up with the steps we generate. So you could go and see what registration looks like for this application: you can very quickly find it and then watch a video of it. One thing we do with our AI as well is handle the cases where it's hard for us to generate an English-language description ourselves. If you click on something which doesn't have any text in it, like a logo or something in a graph, there's no text information for us to go on. So we ask the AI to give us a description: when you click on it, we tell the AI, here's the state of the page, here's the action the user took, how would you describe that? And you'll see it say, clicked on the logo in the top left, or clicked on the highlighted region in the map. So you have that auto-generated description for later, when you're looking at the test and have forgotten what it's doing, or when the AI needs to figure out where this thing has moved on the page.

[00:23:31] Joe Colantonio Does this help with multiple languages? A lot of times when we sold an application, it was in English, but then we had to test it in all the different regions we sold the application in and test all the different languages: outsource it, get the translation, and then check that the translation was correct. This seems like a good use of AI. I don't know if that's something your application does, or does it only handle English?

[00:23:52] Todd McNeal The AI is able to understand different languages, so internationalization is a good use case for it. Also, a common need with internationalization is wanting to run the same test case across multiple languages to test your translations. It's not an AI feature, but within Reflect you can parameterize all of your tests, so you could basically create a spreadsheet and say, for this particular test, I'm going to run it ten times for the ten different languages, and then validate that all the translations are what I expect.
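
That parameterization pattern, one test definition run once per language, looks like this when written out in Playwright; the locales, URLs, and expected strings are invented for illustration.

```typescript
import { test, expect } from '@playwright/test';

const locales = [
  { code: 'en', greeting: 'Welcome' },
  { code: 'de', greeting: 'Willkommen' },
  { code: 'fr', greeting: 'Bienvenue' },
];

// One test body, executed once per locale: the spreadsheet-style
// parameterization described above, expressed in code.
for (const { code, greeting } of locales) {
  test(`translated greeting renders for ${code}`, async ({ page }) => {
    await page.goto(`https://example.com/${code}/home`); // hypothetical URLs
    await expect(page.getByRole('heading', { name: greeting })).toBeVisible();
  });
}
```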

[00:24:19] Joe Colantonio Yeah. Are there any current limitations to the AI features? I know a lot of times tools have issues with iframes or shadow DOMs and things like that.

[00:24:30] Todd McNeal Yeah. Right now, with our AI feature set, the big limitation is the lack of assertions, and that's something we're adding to our tool in the next couple of weeks. We do have support for the shadow DOM with Salesforce, so you could use this for Salesforce, and that's a great use case because Salesforce has a lot of forms with a lot of fields and tables and grids and things of that nature. iFrame support we're adding in the next couple of weeks as well. I would say within the next month there shouldn't really be many limitations in terms of what people want to do. Probably the last limitation is going to be the multimodal piece: until we have that computer vision, there are going to be some use cases we can't handle.

[00:25:11] Joe Colantonio As we mentioned, there are a lot of tools out there, and I believe different tools fit different teams; it's not that one is better than another. Certain situations just lend themselves to different technologies. Who would you say Reflect's target user is? Is there someone listening to this who falls into a category that really should jump on this right now?

[00:25:29] Todd McNeal Yeah. One of the things that was challenging, I mentioned that I come from a development background, so we built this for us first. But one of the things about building a business is understanding who your user is, and you really want to find the users that love the product. One thing we found is that automation engineers and SDET folks who are doing coding just want to code. I think that's the reality, and it was a hard-won lesson: it doesn't really matter how good Reflect is, a lot of SDETs and automation engineers just want to code. So we see our user as primarily the tester who is not doing automation today, or who doesn't want to code but wants to automate. Those are the two users that tend to love this tool. And increasingly we're seeing front-end and product developers, like full-stack developers, using it because it's a time saver: they can own end-to-end testing without it taking up too much of their time.

[00:26:21] Joe Colantonio Great. And Todd, this is mainly audio only, but I know we're actually doing a webinar on July 25th that's going to go over building automated tests using generative AI, so people can actually see this in action. Maybe give a little teaser for people who are going to join us on the webinar of what they're going to see, so they can judge whether this is really going to do what it sounds like it's going to do.

[00:26:45] Todd McNeal Yeah. I know a lot of folks are exploring ChatGPT and other AI technologies, and you may be exploring technologies that let you work faster, maybe generating test cases or generating Selenium or Cypress scripts faster. What we're really focused on is the step change in AI: what would it look like if the AI could be your assistant in building out tests? That means being able to record a test and have it keep working even if there are large-scale changes in your application. It means pulling in your manual test scripts and getting 90% of the way to a fully working automated test. And it means allowing you to express things that right now no-code and low-code tools have a very hard time expressing, so the percentage of things you can automate with these tools becomes even higher. That's really what we're focusing on showing in Reflect, and these are all features that you can sign up for and use for free today.

[00:27:45] Joe Colantonio Yeah, I highly recommend anyone listening to this check out the webinar we'll be doing on July 25th. Even if you're hearing this after the fact, it'll be available for replay, and you'll be able to find it at TestGuild.com/webinar. Hopefully we'll see you at the actual event, because I think it's going to be a lot of value, and I think we're giving away a free book. If you attend, you will get a free book, so it's definitely something you should check out. Okay, Todd, before we go, is there one piece of actionable advice you can give to someone to help them with their automation and AI testing efforts? And what's the best way to contact you or learn more about Reflect?

[00:28:17] Todd McNeal One piece of advice I would give to folks thinking about automation is to think about how to manage state within your automation tests. One area where things get flaky is the tooling's handling of things that change naturally about the application. But the other area where test flakiness comes in is the state of the application changing over time. If you can manage that, it makes your tests much more resilient to changes. Within Reflect, that would look like using our API testing feature to create data as part of the test, and making sure you build your tests so that they don't rely on another test running first, if at all possible. That's what I would recommend. And I'm happy to share any additional information: you can find me on our website, Reflect.run. My email is Todd@Reflect.run, and I'm on LinkedIn and a few other places.

[00:29:11] Thanks again for your automation awesomeness. Links to everything we covered in this episode can be found at testguild.com/a456. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:29:47] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider, and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, we'd love to hear from you. Head on over to TestGuild.info and let's make it happen.
