SmartDriver: Let AI Do the Heavy Lifting with Chris Navrides

By Test Guild

About This Episode:

Want to quickly see a free way to add AI to your Selenium, Cypress, or WebDriver.io automation tests? In this episode, Chris Navrides, founder of Dev-Tools.AI, shares how to write tests with visual AI using their open-source library that extends existing test frameworks. Discover how to integrate it with your framework, find elements without digging through the page source, teach the bot, and much more.

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Chris Navrides


Chris Navrides is the CEO of Dev Tools AI, working to help developers avoid having to fix broken and flaky UI tests. Prior to Dev Tools AI, he was the VP of Engineering at Test.ai, which built the first AI-based mobile and web testing solution. Before that, Chris worked at Google on automation testing for Google Play and mobile ads, and at Dropbox on their mobile platform team. Chris received his Bachelor's and Master's degrees from the Colorado School of Mines.

Connect with Chris Navrides

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

Welcome to the Test Guild Automation Podcast, where we all get together to learn more about automation and software testing. With your host, Joe Colantonio.
Joe Colantonio: Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today we'll be talking with Chris all about SmartDriver, the state of AI in testing, and what testing may look like in 5 to 10 years. You don't want to miss this episode. Chris has a lot of experience in this area. He was the VP of Engineering at Test.ai, which built the first AI-based mobile and web testing solution. Before that, Chris worked at Google on automated testing for Google Play and mobile ads, and at Dropbox on their mobile platform team. So Chris is an expert in this area, and he has a really cool solution he just released, SmartDriver, that's currently free to use. You definitely should check it out after you listen to this episode. You don't want to miss it. Check it out.
The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Their cloud-based test platform helps ensure you can develop with confidence at every step from code to deployment, with every framework, browser, OS, mobile device, and API. Get a free trial: visit testguild.com/saucelabs and click on the exclusive sponsor's section to try it for free today. Check it out.
Joe Colantonio: Hey, Chris, welcome to the Guild.
Chris Navrides: Oh, thanks. Thanks for having me. This is awesome.
Joe Colantonio: Cool. Great to have you. So, Chris, before I get into it, is there anything I missed in your bio that you want the Guild to know more about?
Chris Navrides: No. I think you covered everything.
Joe Colantonio: Awesome. So how did you get into AI?
Chris Navrides: So, funny story. I was actually working at Dropbox at the time, building some bots to do testing. We were building bots and sort of a tree of the entire application, and then having bots go through and test the whole app. I went to a conference and met Jason Arbon, the founder of Test.ai. We hit it off, and over a couple of drinks at the bar late one night, he was talking about AI, and that was sort of my intro to it. Eventually, he recruited me out. I was back at Google at the time, and he was like, hey, why don't you come work on this thing? We're going to do AI for testing and actually solve this problem. And he got me over there. So I like to say I was formerly AI-curious, and then I dove in with both feet, looked at the industry in that area, and went from there.
Joe Colantonio: Nice. So a lot of people's dream job is working for a company like Google. What did he say that made you go, okay, I'll do it? I know Jason is very persuasive. Was there anything specific he said?
Chris Navrides: Just batting those eyes?
Joe Colantonio: His baby blues? I don't know if he even has blue eyes.
Chris Navrides: I think Jason is just a very passionate person, and I think that's what he and I resonated on. He and I both had a shared experience, so we wanted to solve this problem. And the problem is that testing frameworks in general haven't really advanced in about 20 years. For UI-based testing, you're still going in and building selectors for elements that are bound to change, so you're going to be doing constant maintenance. AI has finally advanced to the state where things can get good enough that it's starting to surpass what I think traditional automation can do, just looking at the UI and the visual side. So he was talking about that vision and getting me excited. We were also talking about why a company like Google or Facebook, one of these big mega-corporations, isn't doing it, and I think what we discovered was: at most companies, testing is an afterthought. Across the board, testing is thought of as the reason why you can't launch on time. So people don't tend to want to invest money; if they have $10, they'd rather put it into a new feature that will help drive revenue than into building a better test framework or putting an extra hire on the QA team. That's where startups, and especially some of the new companies coming out, are fully dedicated to that one mission. So that's what really got me: hey, we had a chance, a couple of shots on goal, to try and solve this.
Joe Colantonio: Nice. I was going to save this question for later, but I know your company recently got accepted into Y Combinator, and you're the second testing company I've spoken to recently that's been accepted. When I think of Y Combinator, I think of social media kinds of startups. But it looks like they're starting to take testing seriously, and you said companies don't seem to take testing seriously. So have you seen a change in the way angel investors and investors are looking at this as a real issue as well?
Chris Navrides: Yeah, I mean, talking to investors, I think the number one question that keeps coming up is: why is there not another Mercury out there? Mercury, about 15-plus years ago, sold to HP; it was a $2.5 billion exit, if I recall correctly. Since then, the amount of money people spend on testing, testing services, you name it, has just grown so much, and yet the whole field is very fragmented. There's not one major company that owns it the way you see in the transportation space, where there's Uber and Lyft, right? And they're massive. If you look at search, there's Google and Bing in the U.S.; those are massive. You don't tend to see that in testing. Instead, what you see is a whole bunch of different companies in each vertical, and they don't tend to go far outside their vertical. Some of the private equity firms are starting to coalesce a lot of these testing services into one company, so you're starting to see some of that, but there isn't a clear winner like there is with an Uber or a Google. So I think that's what they're looking for: smart people to solve problems that haven't been solved. And they've invested in a lot of companies in this area, and we're lucky enough to be one of them.
Joe Colantonio: Nice. So what's it like pitching in front of them? I guess you have a certain number of minutes, not a long time, and you need to really hit it. So what was your killer pitch that got you in?
Chris Navrides: It was actually just a very fast ten-minute interview. I've never been grilled that hard. They just pepper you with questions: why you? Why now? Why this market? How are you going to be different from the last companies? All of those kinds of questions, to really see, hey, are you going to persevere? Because I'm sure you've seen this, Joe. You started your business, and you get told no a whole bunch. It's hard to get that initial traction. They want to find folks who are willing to commit, willing to be mission-driven. And this is something my co-founder Etienne and I are very passionate about. I worked with him at Test.ai as well, and we're both very passionate about this space, and I think that part came through. Talking with some of the other founders, especially in the testing space, you tend to see that as well.
Joe Colantonio: Now there are a lot of tools; I get hit up at least twice a week by a new vendor with a codeless AI-based solution. First of all, you've worked for an AI company. Is this legit? When people say AI, are all these companies using real AI? And if they were, why wouldn't they capture the market share if it really was AI and machine learning? I don't know if that makes sense, but...
Chris Navrides: I think so. Let me take a shot at answering it. The way I view testing is that all testing, regardless of whether you're talking API, manual, or UI testing, breaks down into four components, right? You have selection; for UI, that's element selection, building a selector. You have an action, so a click, an API call, whatever. You have verification; these are your asserts, checking that whatever you want to make sure happened, happened or didn't happen. And then you have sequencing: what is the order of these? A lot of these companies, depending on where you look at them, apply AI to various of these areas. If you start to look at these vendors, I'll just throw out a few: Applitools is AI verification at the UI layer. And there are all these different permutations that go in there. Test.ai tried to do all of these pieces with AI, and various vendors do too. I'm not going to claim to be an expert on all the different products out there, but they all try to apply AI in these four areas, in my opinion, to varying degrees of success, just because there's solving the problem and then there's productizing it and making it useful and accessible to everyone. Those tend to be two separate challenges and require two different thought processes.
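To make those four components concrete, here's a minimal Selenium sketch in Python; the page URL and selectors are hypothetical, and each step is labeled with the component it represents:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical app under test

# 1. Selection: build a selector to locate an element
search_box = driver.find_element(By.ID, "search")

# 2. Action: interact with it (a click, typed text, an API call, ...)
search_box.send_keys("smart driver")
driver.find_element(By.ID, "submit").click()

# 3. Verification: assert that what should have happened actually happened
assert "results" in driver.title.lower()

# 4. Sequencing: the order of the steps above is the test case itself
driver.quit()
```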
Joe Colantonio: Nice. So you did start a company called SmartDriver. Is that the name of the company? No, it's Dev-Tools.ai, right?
Chris Navrides: Yeah.
Joe Colantonio: And the solution is Smart Driver.
Chris Navrides: Correct.
Joe Colantonio: So what's the deal with Smart Driver? What's that all about?
Chris Navrides: So, similar to what I was saying, there are those four different areas. We're really just looking to apply AI to one area: element selection for UI-based testing. We think AI tooling right now, especially in the visual field, the recognition aspect, is probably the furthest ahead; there's a lot of research and a lot of open source work around it. So we're able to apply that in the domain of, say, web testing to find the element you want, trained specifically for your application. We think we can do that in a very smart way, where we basically say: hey, if your selector fails, we can take a screenshot, look at it like a human would, apply visual AI, actually locate the element, and then return it so your test can continue to pass. The whole rationale is that if it looks the same to an end user and you're doing a UI test, it should probably pass. Your underlying object shouldn't be the blocker for your test passing if it looks the same to you.
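As a rough illustration of that fallback flow, here's a minimal sketch; the wrapper function and the `classify_screenshot` callback are hypothetical stand-ins for a visual-AI back end, not Dev Tools AI's actual internals:

```python
from selenium.common.exceptions import NoSuchElementException

def find_element_with_visual_fallback(driver, by, value, classify_screenshot):
    """Try the normal selector first; fall back to visual AI if it fails.

    classify_screenshot is a hypothetical callback that takes a PNG
    screenshot plus a label and returns (x, y) screen coordinates.
    """
    try:
        return driver.find_element(by, value)   # fast path: the selector still works
    except NoSuchElementException:
        png = driver.get_screenshot_as_png()    # look at the page like a human would
        x, y = classify_screenshot(png, value)  # visual AI locates the element
        # Map the coordinates back to a live DOM element so the test continues
        return driver.execute_script(
            "return document.elementFromPoint(arguments[0], arguments[1]);", x, y
        )
```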
Joe Colantonio: So does it learn over time? Because I'm just thinking back: we used to say, all right, if you can't find the identifier, it's to the right of this image or to the left of that one. How different is this from that type of logic?
Chris Navrides: So it's similar. It definitely takes training data, and the more training data the better, especially if it's very good training data. By good, I mean realistic. So if that button tends to move, it'll start to learn those properties of where it is. There are certain aspects that AI still has a hard time with, to be very frank. I like to call it the Excel problem. Say you're trying to select an item on a grid, and it's all blank cells except for the row headers and the column headers. How does it find it? All of those cells look the same visually. This is where we're trying to go; it's not solved right now, and I don't see it solved anywhere. But it's a common issue where you have to figure out what's interesting about this particular cell: it's, say, in this row of first names and that column of last names, or what have you. That's how you figure out that that cell is the interesting one the user wants, and sometimes that takes a lot of data. But in general, the way I think of AI testing versus traditional testing is this: when you write your test case with a traditional test framework, so Selenium, Appium, Cypress, what have you, the best your test case will ever be, in my opinion, is the day you check it in, in that hour. Right? Because you have locked that state, you've pulled the main branch, and you're fully there. You know all the variables, and it runs, and it can only get worse from there as people add more to your tests, add more to your selectors, add more to the product, add more feature flags, etc. It only gets worse. AI flips that on its head. AI tends to get better as it progresses. You may have to add a few samples to get it to learn what this object is or what these elements are on the screen. But after it learns that, guess what? If there's a small variation, say it moves to the right of some object, it can still understand that you're looking for, say, the shopping cart button or the search icon, because it's seen enough of those examples throughout your application that it knows generally what that is. And that tends to be closer to what humans do anyway. If you hired a new tester for your team today, they're not going to be that great to start with. But you give them a couple of examples, they learn, they get better, and then they become a really strong tester. That's sort of the promise of AI.
Joe Colantonio: Nice. So how does one get to use SmartDriver? Can you give an example of how to use it with, say, an existing framework?
Chris Navrides: Yeah. So right now we support Selenium and Appium. We'll have Cypress support out soon, and then we're going to continue from there. But essentially you just install it with whatever package manager you're using today, so npm, pip, or Maven, then you import our package and wrap your existing driver. What we do from there is basically wrap all of your selector methods in a try-catch. As soon as a selector fails, we kick in: we take a screenshot and look for that element visually. We do that on our back end, then we return it, and it works just the same; it's the same element, found and located on the screen. So if you build really great selectors for your tests, awesome, there's no real perf hit for you; we're not kicking in, we're not doing anything. It only kicks in when there is an issue, and any perf hit you'd see there, you'd already have, because you couldn't find that element on the screen. So we're trying to be purely additive, purely a helpful backup solution for folks.
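Based on that description, wiring it into an existing Selenium script would look roughly like the sketch below. The `devtools-ai` package name and the `SmartDriver` import path are assumptions drawn from the conversation, so check the project's current documentation for exact names:

```python
# pip install devtools-ai selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from devtools_ai.selenium import SmartDriver  # assumed import path

chrome = webdriver.Chrome()

# Wrap the existing driver: every find_element* call keeps working as
# before, but gains the visual-AI fallback when a selector fails.
driver = SmartDriver(chrome, "YOUR_API_KEY")  # key from your dev-tools.ai account

driver.get("https://example.com")  # hypothetical app under test
element = driver.find_element(By.ID, "search")  # AI only kicks in if this fails
```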
Joe Colantonio: How much time does it add if it does fail and goes through this process?
Chris Navrides: It takes about two and a half seconds. In most frameworks, a large bottleneck is just getting the screenshot itself; that's about a second. Then it has to upload it and run inference on the machine learning model, which takes another second or so, and then there's all the underlying stuff: mapping the element back onto the page given the coordinates, and things like that.
Joe Colantonio: Cool. So an automation engineer knows why this is cool, but what is the main benefit you get from using SmartDriver?
Chris Navrides: It's actually a really good question. I think there are a couple of benefits. The first is that you're not going to have to spend as much time maintaining tests. As tests grow and mature within your framework, things change. There could be A/B flights going on with certain elements in your test, so you'd have to go in and put in if-else type checks (check if this element exists, else check if that element exists) for every single permutation. What's great about SmartDriver is that it will actually learn from that automatically. You don't have to change your test code, you don't have to do another PR; it'll just learn, oh, hey, this button can be red or blue depending on which experiment group it's in. You don't have to modify your code if it still looks the same. Or maybe instead of an experiment, you're changing frameworks: the page is being re-rendered because you're switching to React, so the page object models are changing, but it looks the same to your end user, so you don't have to change your test code. That's the big time savings. We've talked to a lot of companies, and as they've grown their test frameworks over the years, their teams have grown and are now spending more and more time just maintaining tests. We'd love for those folks to not have to maintain tests as much, and to use this as an assistant: hey, let's keep the tests green, and then let's surface some data, like, these tests are flaky, and here are some selector options that might work a little better so you don't even have to use our service, things like that. We have a secondary benefit as well that we're trying out, which is, if you want to skip building selectors entirely, you can just use the AI. We add an extra driver method, find by AI, and you pass in a human-readable string. You go into a web UI and you can basically teach the bot: hey, this is what a search icon is, here's a menu icon, or what have you. And then you don't even have to build a selector at all.
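That selector-free mode would look something like the sketch below; `find_by_ai` is the method Chris describes, and the labels are illustrative examples of names you would first teach the bot in the web UI:

```python
# With the wrapped driver from the previous sketch, skip selectors entirely
# and look elements up by the human-readable labels taught in the web UI.
# (The labels here are examples, not predefined names.)
cart_button = driver.find_by_ai("shopping cart button")
cart_button.click()

menu_icon = driver.find_by_ai("menu icon")
menu_icon.click()
```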
Joe Colantonio: That's cool. Is that newer, or has that been tried and proven?
Chris Navrides: I mean, it's the same logic; it's the same thing that happens in the catch block when your selector fails. So if you want to not build a selector to begin with and just use the AI, you can. The method itself has been proven. We used a similar technique at Test.ai, and it's also similar to a lot of open source work. We're standing on the shoulders of giants, and most of the AI out there is doing this: you leverage what folks at companies like Google and Facebook have done and open sourced, things like finding a dog in a large picture. It's a similar approach to then finding your submit button or your login field; they tend to have a lot of overlap. So what we're able to leverage is all the research from all the really smart folks at these large companies.
Joe Colantonio: Nice. So you came out of Test.ai, and Test.ai was doing a lot of things. When you start a new company, how do you know this is the sliver you're going to start with? Is it something that's manageable, or is it something you saw as the biggest pain point, something like 80% of the people using Test.ai were struggling with?
Chris Navrides: Test.ai had a lot of really cool technology, a lot of great people, and a lot of great customers. What we saw there was that a lot of folks were hesitant to jump into a full-fledged framework that came with a rip-and-replace problem, where you have to start over on your test automation. With that, it's a little bit harder to get folks to try it out and use it, because they'd have to redo all the work they've spent years, months, and weeks building. What's different, and what we thought afterward, is: hey, what if we can just meet folks where they are today and help them be more efficient, so they don't have to spend as much time maintaining their tests? What would that look like? Could we apply some of this technology and these ideas, take the latest and greatest that's out there, do some research on what's changed in this field even in the last couple of months, and then apply that? So our thought was, let's just integrate with these frameworks. Then guess what? You only need to change two lines: you import our stuff and wrap your driver, and you get these benefits of AI without having to do all the hard work, without having to rewrite your tests, etc. We find a way to learn from your existing tests and then build backup selectors for you.
Joe Colantonio: It's almost like a bot approach again, assimilating one function at a time, so eventually it's like, oh, it's my bot.
Chris Navrides: Exactly, exactly. I think the key thing we learned just chatting with folks early on is: if we can learn from the existing tests you're already using, then guess what? You can automatically jump ahead, as opposed to having someone go in and reteach the bot. That's where we've tried to keep it as simple as possible and purely value-added.
Joe Colantonio: Great. So, just curious, with your experience in AI, and you're probably speaking to all the hipsters in Silicon Valley now, what do you think is the future of AI? Some people have opinions like, oh, it's not truly AI anyway, and it's never going to be able to do anything that great. Do you see it more as working with people to bubble up insights, or is it eventually going to take over more and more of what an automation engineer is already doing?
Chris Navrides: I think the best place for it is as a tool that automation engineers can use as a force multiplier for themselves. It's not necessarily going to replace testers or automation engineers, but it's going to allow them to focus on the hard problems and really allow them to scale themselves and have more impact across the company and the organization. The key thing to think about here is that a lot of times when people think AI, it reminds me of when ATMs came out and everyone said, oh, the tellers are going to lose their jobs. But when you look at it historically, there are now more tellers than there have ever been, and ATMs are prevalent. So it's really a win-win scenario. The other thing to think about is that AI is fundamentally a data game. When you see AI applied on the product side, you see new positions appear where data engineers come in; Netflix is a great example of this. You have data engineers looking at all the data and helping train the AI for content selection. It's not replacing the software engineers who build all the infrastructure on top of that to make Netflix great and stream and all those aspects; it's an augmentation that helps the end user, and the data engineers tend to still be software engineers who write a lot of code. I think the same thing will happen with automation and test engineers. There are going to be folks who look at data, and those data signals could be product insights, right? Where are the paths users take through your app? Then instead of having metrics around code coverage, you'd have metrics around what percentage of your user flows are covered by automation or covered by this build. If we can say 95% of the user flows within our application are now covered with automated tests, that's a very true and interesting signal to upper management and the team: oh, cool, maybe only 5% of folks might see some issue; how can we increase that, or how do we augment that automation with manual testing or some other sort of testing? And you can start to leverage data around, hey, what are the core outages we've seen in the logs? Can we replicate those and try to permute them? Maybe service A failed, but then what happens if A and B fail at the same time? Does that cause something catastrophic? You can start to leverage these data aspects as you test. That, I think, is the future: machine learning can help do that and help scale it, but it then allows the automation engineer to focus on that 5%, that 10%, the new feature that's coming out. Instead of maintaining and making sure we didn't break anything with this release, they can say this new feature is really ready for end users and customers.
Joe Colantonio: Cool. Is there anything AI can find that a tester would never be able to find? I've been reading a book where they used AI to discover a new vaccine. They just set parameters, it ran through a bunch of things, and it found connections a human wouldn't even have thought of. Is that a possibility here, or do you just need a lot of data for that, and AI in testing wouldn't fit that particular parameter?
Chris Navrides: No, it's the cutting edge. At some point it might be that way, but creativity in terms of machine learning is hard, because right now almost all the algorithms, all these machine learning pieces, are geared towards saying, hey, what is going on in the state of the world, and let me replicate it. The extra creative stuff that humans are very good at, the what-if questions, those are only starting to be tackled today. What I've seen in this space, which is interesting, is that we have things like monkey tests and chaos monkeys that do something similar. The issue is then the validation aspect. How do you validate when, say, this AI system goes and tests all these crazy permutations of all these test cases? There becomes a kind of "so what" aspect, right? If it finds an issue where, say, a UI monkey taps the application icon 20 times and then it crashes, okay, it's probably a valid bug; it's replicable, you can go redo it and see it. But then you look at the logs and ask, how many times have we seen the stack trace of that crash? Maybe never, maybe one time, because people don't tend to click on something that many times. So there's a cost to this balance of validation, and I think that's going to be the main issue. That's where test engineers who understand the product and the end user very well are going to be the ones who have to sift through and learn from that data. But again, it goes back to that whole idea that it's a great tool to force-multiply the test engineer. Now that test engineer can say with confidence, we've tried 20,000 permutations on this app, I've looked through the errors, and these are the top errors that I think are realistic. And realistic is the key word in that sentence; it has to be realistic and apply to that person, that application, or that company, and those tend to be very different from company to company. What Lyft cares about might not be the same as what Uber cares about. There are obviously some things they both care about, like making sure you can order a ride, but then there are aspects like, they got in trouble for X or Y and need to make sure it doesn't happen again. I've always liked to say there are a couple of different types of tests. One is your core user flow tests: what are your users doing today? And then there's the CYA test, where maybe upper management goes to jail if you screw it up. We always joked about Korean age verification: if you do purchases in Korea, you have to verify that the person is, I think, over 16 or 18 years old to do an in-app purchase. If you don't do that, you're breaking Korean law. So we always made sure to have that test, even though that code doesn't ever really get touched, because we liked our managers and didn't want them to go to jail. Those things are industry- and company-specific, and that's again where having those testers and automation engineers able to leverage and use this kind of tool will be the key benefit.
Joe Colantonio: Nice. So someone wants to get started with SmartDriver. What do they have to do?
Chris Navrides: We have documentation you can check out, but essentially you pip install or npm install or Maven install it, add it to your existing test automation, sign up for an account, get an API key, add two lines to your existing test script, and then you should be off to the races.
Joe Colantonio: And currently there's no charge, as of now, correct?
Chris Navrides: Correct. We're looking into what, if anything, makes sense for charging. We're big fans of open source; we have open sourced our actual SDK, and we're looking into open sourcing our back end so someone can bring it up on their own. Right now we're iterating pretty quickly, so we want to make sure people have a good experience bringing up the back end themselves. We're also looking at rolling this out to the cloud marketplaces, so you don't have to worry about bringing it up yourself; you could just deploy an instance through the AWS or Google Cloud marketplace, and it would run the entire back end for you, so no data would leave your company's systems. It's early days, and we're working on it. Our main goal right now is really just to engage with the community and say, hey, how can we help you? We got YC funding, which is great, so we want to find ways to help the community and, in the same Google-like approach, figure out if and how we can monetize it later. But we're going to keep open source as one of our top priorities, because at the end of the day, we're mission-driven on this. If someone else figures out how to monetize it and it moves the industry forward, hey, I would still call that a success. Our investors might not, but I would. So again, I'm happy to engage, happy to hop on a call with anyone and discuss, because I think this doesn't get discussed enough, which is why, thank you, Joe, for doing this. There are a few other folks in the community, but the sharing of ideas is what helps elevate us and get us to that next level, those next steps in the community. What's the next cool thing? What's going to help all of us not have to maintain these tests all the time?
Joe Colantonio: Absolutely. I think sometimes people shut the discussion down: AI is not real, there's no such thing as automated testing versus manual testing, and it just gets all kind of weird at that point to try to have a conversation. So thank you for coming on and sharing more about it, because I think the community needs to know; I don't think you can just ignore it. I know back in the day when I was doing automated testing, people said automation couldn't help, and then it came along anyway. I think it's the same thing with AI: for some reason there's resistance, but over time I think it's inevitable that it's going to become part of every tool, probably, I would assume.
Chris Navrides: I tend to think so. In general, people have different mentalities and view things through their personal lenses, and I don't think anyone is right or wrong. I think the right answer tends to be a blend. I don't think everything should be 100% automated. There's some stuff a human eye is really great for: being able to look at a product, at a user scenario, at a user flow, with all of your experience of understanding your customers. That's one of the beautiful things about test engineers; they really are the advocates for the customer. That is really hard to automate and simulate. So I think you need a good, well-balanced blend of those. And eventually, the long-term direction some people think AI is going is that you could start to build giant AI robots of various people and personalities. So at some point, maybe there's going to be a Joe robot, maybe there's going to be a Jason Arbon or a Michael Bolton robot. You could download an AI brain of Michael Bolton or Jason Arbon, and they'd go ahead and test your app the way they would. That would be, I think, a very interesting future. I'd love to see that and just play with it fiercely.
Joe Colantonio: The Joe robot would be very polite.
Chris Navrides: Exactly, you've already started.
Joe Colantonio: Okay, Chris, before we go, is there one piece of actionable advice you can give to someone to help with their AI automation testing efforts? And what's the best way to find or contact you, or learn more about Dev-Tools.ai?
Chris Navrides: Yeah. Check out our website, dev-tools.ai, or send me an email at chris@dev-tools.ai. We also picked up the domains getdevtools.com and aidevtools.com, because the name is hard to remember. Feel free to reach out. We also have a Discord community (Discord is a free Slack alternative). But for anyone who wants to get started with AI in general, the barrier to entry has dropped so low in the last couple of years that anyone can start to play with these things. There's a tool made by Google, or something like that, where you can basically build AI-based models without any sort of expertise or training: you just say, here are three pictures of dogs, here are three pictures of cats, and it builds an ML model for you that can look at a picture and determine if it's a cat or a dog. So if you want to play with it, you can start today. There are a lot of open source and free tools out there as well that you can check out. There's also the AI for Software Testing community, which is focused exclusively on this and sends out emails; that's a good resource as well. And just be open-minded and try things. There's a lot out there, and some of it may not apply to your particular need or work for your particular use case, and some of it might work really well. It's worth at least honing your craft and trying things out; that continued self-development is always useful. So try to find some time to play with all of these tools. There are a lot of them out there, not just ours, and a lot of great tools, companies, and people in the industry trying to help solve this problem.
Joe Colantonio: Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a410, and while you're there, make sure to click on the "try it for free today" link under the exclusive sponsor's section to learn all about Sauce Labs' awesome products and services. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
Thanks for listening to the Test Guild Automation Podcast. Head on over to testguild.com for full show notes, amazing blog articles, and online testing conferences. Don't forget to subscribe to the Guild to continue your testing journey.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Brian Vallelunga TestGuild DevOps Toolchain

Centralized Secrets Management Without the Chaos with Brian Vallelunga

Posted on 09/25/2024

About this DevOps Toolchain Episode: Today, we're speaking with Brian Vallelunga, the founder ...

A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

Testing Castlevania, Playwright to Selenium Migration and More TGNS136

Posted on 09/23/2024

About This Episode: What game can teach testers to find edge cases and ...

Boris Arapovic TestGuild Automation Feature

Why Security Testing is an important skill for a QEs with Boris Arapovic

Posted on 09/22/2024

About This Episode: In this episode, we discuss what QE should know about ...