About This Episode:
Today, Gaurav Mittal, an expert data science manager with over 18 years of experience, joins the podcast. In this episode, Gaurav shares his journey from manual to automation testing and delves into AI's revolutionary impact on software testing careers.
Add visual checks to your tests now: https://testguild.me/vizstack
We'll explore how open-source AI models and libraries like TensorFlow and Keras make powerful tools accessible without the price tag and discuss the crucial role of retraining machine learning models to adapt to dynamic data.
Gaurav will highlight the substantial benefits of automation in categorizing emails and its time-saving prowess. We'll also uncover the advantages of shift-left testing with AI, enhancing efficiency in the CI/CD pipeline and fostering collaboration among QA teams, developers, and project managers. Moreover, Gaurav offers a comparative insight between Selenium and the newer Playwright, advocating for the latter's superior performance.
Throughout the episode, Gaurav emphasizes the practicality of AI as an assistant rather than a necessity in automation efforts. He also provides hands-on advice for integrating open-source AI models into your processes. Stay tuned for actionable tips and incredible insights on utilizing AI to elevate your automation game—all this and more, right here on the TestGuild Automation Podcast!
Exclusive Sponsor:
Visual bugs aren't merely cosmetic—they're silent revenue killers. Consider this: 88% of users are not likely to return after a poor experience, and nearly 40% will abandon a site if images or layouts don't load properly.
With the explosion of browser-device-OS combinations in the average test matrix today, each undetected visual issue becomes a missed opportunity and a potential customer loss.
Every glitch could mean hundreds or thousands of dollars in lost revenue and diminished trust.
Here's a common myth: functional testing can catch visual inconsistencies. The reality? Plenty of visual bugs still make it into production after functional tests pass. You would need an endless number of functional tests to achieve real visual coverage.
Most modern teams lack dedicated QA specialists and rely on CI/CD. With high release velocity, there's no last line of defense for visual bugs, and manual checks aren't practical. As a result, visual bugs remain undetected.
Design-focused teams at scale use component-based design systems. Without component testing, there's no way to know if updates break the UI or affect critical pages. Moreover, responsive tests are complex due to device fragmentation.
Trusted by Canva, Sainsbury's, and Intercom, the BrowserStack visual testing suite is a one-stop solution to ensure pixel-perfect UI.
While App Percy helps you perfect your mobile apps visually, Percy helps you with manual testing on websites and mobile browsers. Visual Scanner helps automate visual testing across websites and web apps.
What sets the BrowserStack Visual Testing suite apart is its AI-powered visual engine, which enables teams to focus on the most critical visual changes and discard the noise.
Moreover, you can test across 20K+ real devices and different browsers.
You can also integrate across the BrowserStack product suite for automated cross-browser testing across websites (Automate) and automated mobile testing (App Automate).
Set up quickly and integrate easily with tools like GitHub, GitLab, and Jenkins. You can get started in minutes—no installation is needed. Just add one line of code to your test script.
Get started with parallel testing to scale up as your test maturity expands. Enable UI component testing for design-centric teams. Collaborate effortlessly by reviewing, commenting on, and approving snapshots while keeping the team updated on the status of pull requests.
With BrowserStack Visual Testing, you'll ship flawless, pixel-perfect UI faster. Give it a try: https://testguild.me/vizstack
About Gaurav Mittal
Gaurav Mittal is an accomplished author and international speaker, recognized for his published articles, including "Implementing Email Attachment Security" in Informs ORMS and "Time-Cost Effective ML Model Deployment Using AWS Lambda" in Informs Analytics magazine. He has spoken at global conferences and served on several judging panels. In addition to his professional achievements, Gaurav actively contributes to non-profit organizations through volunteer work. Outside of work, he enjoys spending time with his children and playing sports.
Connect with Gaurav Mittal
- Company: www.GauravMittal
- Blog: www.gauravmittal1985
- Twitter: www.GauravM85
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.
[00:00:34] Joe Colantonio Hey, how will AI impact your career in software testing? Well, you're in for a treat if you don't know because today, we'll be talking with Gaurav all about it. All about AI, software testing shift left, all the things, really excited having him on the show. He's a seasoned data science manager at a large, large enterprise. He has over 18 years of experience and he's really helped a lot of teams develop and deploy cutting edge tactical solutions. He's written a bunch of different blog posts I'm going to highlight here as well. So if you want to learn about the impact of AI and software testing, you don't want to miss this episode, check it out.
[00:01:08] Joe Colantonio Visual bugs are silent revenue killers. Did you know that 88% of users won't return after a poor experience, and nearly 40% abandon a site if layouts or images fail? As a tester, functional testing alone can't catch these issues, especially with today's complex browser and device matrix. That's where BrowserStack's Visual Testing suite comes in. Trusted by Canva, Intercom, and more, it ensures pixel-perfect UIs with AI-powered visual testing across 20,000 real devices and browsers, from App Percy for mobile apps to automated cross-browser testing. It's a one-stop solution for flawless UI designs. Integrate easily with GitHub, Jenkins, and more. No installation needed. Start shipping perfect UIs today and see the difference for yourself by heading over to TestGuild.me/vizstack or use the special link down below.
[00:02:05] Joe Colantonio Hey Gaurav, Welcome to the Guild.
[00:02:08] Gaurav Mittal Hi Joe, thanks for having me. I am a big fan of you and the podcasts. Like, they are very knowledgeable.
[00:02:15] Joe Colantonio I love that. Thank you so much. I appreciate it. I love the knowledge you've been dropping lately as well. And I'd like to dive into. I guess before we get into that, though, as always, curious to know, how did you get into automation testing?
[00:02:26] Gaurav Mittal Right from the very beginning of my career, I was put into testing. I love finding issues and making sure that whatever we are shipping is of robust quality. Eventually, manual testing was taking a lot of time, and automation testing started creating waves. It was like white box testing, where you were supposed to write some piece of code so that your manual testing efforts could be reproduced. That's how I got fascinated by it. I love coding. I started my career with Java, and Selenium was there, so I have worked a lot with Selenium and Cucumber. Over the past 3 to 5 years there has been the AI wave — how AI can be utilized to do automation testing — and I've written several automation utilities and provided some innovative solutions to organizations that were very successful. That has been my journey, from manual QA to automation, and now leveraging AI in automation.
[00:03:34] Joe Colantonio Very nice. What are some of those automation utilities?
[00:03:39] Gaurav Mittal There are a couple of automation utilities I'd like to talk about. One of them is how you can categorize emails. For example, most organizations are service-based organizations, and they provide their services to product-based companies. These product-based companies then have a lot of their own end customers. If the service-based organization makes some changes, those changes may cause trouble for the end clients. The end clients start raising production incidents, and those incidents land with the support team. The support team has to manually look into each and every email and figure out the category: what type is it? Is this an incident, or feedback, or an enhancement request? Then they have to pass it to the respective internal team, where the QA and Dev teams look over the incident. This is all a manual activity, and there is a chance of error as well — suppose the support team does the wrong analysis and passes it to the wrong internal team. And this support activity has a very small SLA, because these are production incidents: if it's a P1, you have only 3 to 4 hours to reply. I provided an innovative solution here where you can use AI/ML models like text classification, which can read the email content and figure out the category and what type of incident it is. The second part uses image classification models, because these end-client emails come with a logo, and you can feed that logo to deep learning models built with the Keras and TensorFlow libraries. So the utility has two parts: text classification, which tells you where the email is originating and what the category is, and image classification, which gives you the confidence about which client we're talking about. Combining both of these — if both point to the same client — your automation utility is giving you the correct output. And if you look at the advantage of it, we drastically reduced the manual effort that was required from the support team, and later from QA, who had to triage the incidents manually. This one automation utility has been very helpful for several teams.
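Gaurav doesn't share his code in the episode, but a minimal sketch of the text-classification half of a utility like this might look like the following. This uses scikit-learn for brevity; the categories, sample emails, and confidence handling are made up for illustration and are not his production pipeline.

```python
# Hypothetical sketch: categorize support emails by type (incident / feedback / enhancement).
# Training examples and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_emails = [
    "Checkout page returns a 500 error for all users",    # incident
    "Love the new dashboard, great work",                  # feedback
    "Please add an option to export reports as CSV",       # enhancement
    "Payments are failing since this morning's release",   # incident
]
train_labels = ["incident", "feedback", "enhancement", "incident"]

# TF-IDF features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_emails, train_labels)

new_email = "The login button does nothing after the latest update"
print(model.predict([new_email])[0])           # predicted category
print(model.predict_proba([new_email]).max())  # confidence score
```

In practice you would train on the real labeled support emails and route each prediction (plus its confidence) to the right internal team.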
[00:06:00] Joe Colantonio I love this because a lot of times people just focus on functional automation. And this sounds like an area where you had a lot of manual things like, Hey, we can automate this. A lot of times people don't get involved in that business or find those opportunities or speak up. How did you find it? And then speak up and say, Hey, actually we can automate this and how do you get the okay?
[00:06:20] Gaurav Mittal Yeah. Actually, it was brought to my attention by one of my senior directors. He said, this is what's happening — the support team has to invest a lot of time. Can we automate it? When I started looking at it, I thought it was a simple utility: emails are coming in, there are open-source Python libraries like Beautiful Soup, so you just parse the text, find the right line, and extract it. The problem was, for example, Walmart. Walmart has headquarters in the U.S. as well as in Canada, but the email text just says Walmart. How will you differentiate whether this email is from a Walmart U.S. customer or a Walmart Canada customer? To the naked eye you cannot differentiate the logos, because they use the same colors, but with Python libraries the difference is detectable. That's where I used an AI image-classification model to figure out whether it was Walmart U.S. or Walmart Canada. It turned out to be an awesome utility. Yes, it was brought to me as an automation task, but later it turned out to be automation plus AI.
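For the logo part, a small Keras/TensorFlow image classifier along the lines he describes could be sketched as below. The folder layout (logos/walmart_us, logos/walmart_ca), image size, and network architecture are all assumptions for illustration, not the actual utility.

```python
# Hypothetical sketch: tell two visually similar logos apart with a small Keras CNN.
# Assumes a folder layout like logos/walmart_us/*.png and logos/walmart_ca/*.png.
import tensorflow as tf

IMG_SIZE = (64, 64)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "logos", image_size=IMG_SIZE, batch_size=16)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(*IMG_SIZE, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Classify the logo extracted from a new incoming email
img = tf.keras.utils.load_img("incoming_logo.png", target_size=IMG_SIZE)
probs = model(tf.expand_dims(tf.keras.utils.img_to_array(img), 0))
print(train_ds.class_names[int(tf.argmax(probs[0]))])
```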
[00:07:26] Joe Colantonio And just so folks know, we'll have a link to both of these articles down below so you can take a deeper dive. All right. How did AI get involved then? How much did you need to know about AI? Because a lot of people are like, I know automation, I don't know AI, and I usually have to pay for it. Can you talk a little bit more about the AI piece — what is it, what did it cost, and how hard was it to get up to speed with it?
[00:07:48] Gaurav Mittal Yeah. AI is everywhere — everyone is talking about AI, everyone knows AI. The point is, if you have some piece of work and you're adding an automation utility, it does not mean you have to use AI. AI is just an assistant. It can help you to some extent, but on its own it cannot give you the desired output you're looking for. For example, AI is now impacting the shift-left approach. Earlier, QAs were part of traditional testing and were not getting enough visibility — they were brought in only after the dev phase, and when the developers were done with their code, then QA would start their testing. With the shift-left approach, QAs are getting visibility and starting from the very beginning, and with AI included in shift left, the locators that QA used to find manually can now be found through intelligent tools like Tosca Vision AI or Applitools. What I'm trying to say is, you don't need to start thinking from day one that you want to leverage AI. Just go with the flow, start writing the automation utility, think about what the tasks are, and if you find an opportunity in between where ML models make the work easier, then you can implement them.
[00:09:10] Joe Colantonio I guess, where do you find these models, though? Are they paid solutions? Like if someone's thinking, okay, maybe I should use AI — how do they even know where to look, I guess?
[00:09:19] Gaurav Mittal No, it's not like these models are always paid. There are so many open-source models that are readily available. For example, for text classification we have natural language processing, and there are several libraries which are freely available — you can use them and write an ML model. Similarly, for image classification, the Keras and TensorFlow libraries are freely available. It's not like you always have to go for a paid solution. There are open-source solutions that are easily available, and you can refer to Kaggle — it's a competition website where you'll find recent examples of how these kinds of models are used.
[00:10:04] Joe Colantonio Love it. I know also, when you talk about emails, I assume there are a lot of different formats and a lot of different brands, and they're constantly changing. How hard was it to create this so the automation is reliable and you don't have to keep maintaining it?
[00:10:19] Gaurav Mittal Yeah, that's a good question. Whenever you're using AI, you have to keep retraining your ML models. The reason is that you're given one particular data set initially — for example, I trained this email categorization utility on 30-plus clients. But over time, take Walmart — I gave you the example of the U.S. and Canada — now Walmart Mexico is also included, and Walmart Mexico is sending emails in Spanish, which you hadn't considered before. Your AI model will fail because it's not able to understand that language. The answer is retraining. Keep looking at why the ML model is failing, then retrain your ML models and deploy them again. That's how you increase the accuracy of your ML model.
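The retraining loop he describes can be as simple as refitting on the expanded, relabeled data and redeploying the model artifact. Here's a minimal sketch; the CSV file names, columns, and model choice are placeholders, not his actual pipeline.

```python
# Hypothetical sketch: retrain a pickled text-classification model when new
# clients (e.g., Spanish-language emails) start causing failures.
import pickle

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Combine the original training data with newly labeled failing examples
old = pd.read_csv("training_emails.csv")        # columns: text, label
new = pd.read_csv("newly_labeled_emails.csv")   # e.g., Walmart Mexico emails in Spanish
data = pd.concat([old, new], ignore_index=True)

# Refit the whole pipeline from scratch on the expanded dataset
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(data["text"], data["label"])

# Save the retrained model so the automation utility picks it up on redeploy
with open("email_classifier.pkl", "wb") as f:
    pickle.dump(model, f)
```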
[00:11:10] Joe Colantonio All right. Based on the work you have to do to keep it up to date, compared to when it was done manually, was it worth automating?
[00:11:17] Gaurav Mittal Yes.
[00:11:18] Joe Colantonio In the long run?
[00:11:20] Gaurav Mittal Yeah, it is. Because reading the emails manually — suppose you're getting 1,000 emails, you're spending several hours. You have to first read each email manually, then analyze it: what is the category, which team does it go to? Now it's all automated. The automation utility gives the output directly to the right team: this is the issue, it's an incident, feedback, or enhancement, what the category should be, whether it's a P1, P2, or P3. All this information is handy, and no manual effort is needed.
[00:11:52] Joe Colantonio Now, just to make sure it's performing correctly, how do you know? Do you use any metrics or KPIs to evaluate it over time, to see how it's doing, or even to show your manager: hey, look, I saved X amount of time this quarter, so give me a raise?
[00:12:06] Gaurav Mittal Yeah. To depict your automation efforts, it's about generating KPIs — metrics like the category breakdown and how much time it takes, manual versus automation effort. So yes, I created a dashboard to show all of this, as well as how much time has been saved by this automation utility.
[00:12:26] Joe Colantonio Love it. All right. So you also mentioned shift left, shift left has been a concept around for a while. How does AI fit in now to shift left testing?
[00:12:34] Gaurav Mittal Yeah, shift left. It has been around for a decade and it's booming — most organizations have found value in it. With shift left, QA is now involved right from the requirements phase. QA has visibility, and as soon as the product owner gives you the whiteboard diagrams and mockups, QA can start writing their test scripts while, in parallel, the developers write their code. Then, when the developers are done and have deployed their code, they provide the actual URL and QA updates their test scripts. There is still manual effort needed, because QA has to update the locators against the actual UI. With AI in the picture, there are intelligent automation tools available to which you can feed these mockups or whiteboard diagrams; on the fly they create the locators, and later, when the actual URL is available, you just change the URL — there's no need to update the locators. That reduces the manual effort required from QA. Secondly, we're ready with the automation script: once the developers are done with their task, QA is also ready with their automation script, they can find the defects, and the developers can work in parallel on fixing the code. There's another big advantage if we look at the bigger picture: a CI/CD pipeline is running, and whenever a developer raises a PR, behind the scenes the automation test scripts run. If scripts are failing, these AI tools can update the locators on the fly and make sure we see more green in the CI/CD tools — there are fewer failures because of test flakiness. Overall, these AI tools help with parallel work and with CI/CD, reducing the time spent figuring out why a pipeline is failing. And one last benefit: QA gains more visibility, collaborating with project managers, product owners, and developers right from the very beginning.
[00:14:51] Joe Colantonio Love it. All right. So once again, is this an open source solution you created like a framework or is it a framework that includes paid solutions?
[00:15:00] Gaurav Mittal For this, I used a paid automation tool, and with that paid tool this AI feature was free — it was open source.
[00:15:13] Joe Colantonio Let me make sure I understood the first part. It sounds like you had an image — like a Figma image, or even a UI/UX mockup — and it's able to read the image, so you can start testing, or at least have your test in place and ready to go, before the actual application is developed. Did I hear that correctly?
[00:15:30] Gaurav Mittal That is 100% correct. Not only a Figma diagram — even if they're drawing whiteboard diagrams while discussions are happening, you just take a photo, it's an image, and you feed it to the AI automation tools. These tools will find the locators and help you create the automation test script, and these locators act like IDs in Selenium: they're quite stable and they work cross-browser and cross-platform. That's why the CI/CD pipeline also works fine.
[00:16:00] Joe Colantonio What's the solution you're using for that?
[00:16:02] Gaurav Mittal We're using — I have worked on a Vision AI tool.
[00:16:06] Joe Colantonio It's an open-source Vision AI tool?
[00:16:08] Gaurav Mittal Yeah, I have used Vision AI, but I've read there are some other tools available as well, like Applitools. There are several tools available in the market.
[00:16:16] Joe Colantonio Nice. Nice. Very cool. So could you also use it before they develop to look at a UI/UX to say hey, this is not a good flow or is that too general?
[00:16:26] Gaurav Mittal No, this became possible only after AI came into the picture. These automation tools have enhanced their features and included AI. With this AI they're basically performing image classification: reading the images, parsing them, generating the locators. All of this became possible after AI came along.
[00:16:49] Joe Colantonio All right. Just speaking with other testers, they seem to get a lot of resistance to implementing a new testing approach, or there are certain testers who say AI's garbage and all that. But obviously, you've been doing automation for 18 years and you're creating automation solutions that aren't just paid solutions — they're also open source. What do you say to people who think that?
[00:17:11] Gaurav Mittal I would say AI is not garbage, but it's also not a complete solution. Just consider it an assistant; ultimately, it's you who has to drive toward a robust production delivery. AI can help you enhance your test suite or make your tests more stable, but it's you who has to apply it to make sure your scripts are more robust and there's less flakiness. You need to drive your automation tests.
[00:17:45] Joe Colantonio Nice. I know, as an automation engineer, a lot of time is spent triaging failing tests. I think I heard you say this, but maybe I'm wrong: do you have a solution that can analyze failed test results and triage them automatically, kind of like the email utility buckets things? Real issue, not an issue, they're all due to this one issue — fix that one issue and fix 99% of the tests, that type of deal?
[00:18:07] Gaurav Mittal No, I haven't worked on that kind of thing. Issues are there, and if they're production issues, it's probably best to look at them manually rather than relying on some tool, whether it uses AI or not. But to be honest, I haven't worked on any such utility as of now.
[00:18:27] Joe Colantonio Nice. Do you have any stories comparing before shift left with AI and now?
[00:18:33] Gaurav Mittal Yeah. Before shift left, what was happening? In the sprint discussions there were whiteboard diagrams and mockup diagrams being provided, but I had to write all those tests manually, and even if I was writing the automation test script, I couldn't provide the ID locators because I can't inspect an element — it's just an image. Only once the developers were done and had provided the URL could I inspect the HTML elements and build my automation script, so the manual effort was still there. The second challenge with automation scripts is that when you run on the Chrome browser it works fine, but on another browser it fails because it's not finding the same locator. Now, with the AI tools in the picture, when you feed the images to the automation tool, it generates locators that keep working even once the actual UI is provided — if a locator needs to be updated, the tool does it itself on the fly. The automation tests don't fail because of locator issues. My role as a QA is still there for things like data validations, but on the UI side there's a lot more ease because of these AI tools — we're not facing locator issues any longer. That's the main benefit I noticed with the shift-left approach.
[00:20:02] Joe Colantonio All right. Say a tester is worried: this is going to take my job, because I spend 99% of my time on flaky locators and that's kept me employed all this time. What do you say to that?
[00:20:14] Gaurav Mittal No, I don't think so. AI is not going to take any job. Again, AI is just an assistant — it's you who has to drive, because AI doesn't have a brain. You're the person who has to write the tests as part of your project requirements, and AI can just help enhance them and make them better.
[00:20:36] Joe Colantonio All right. So you seem like someone that's always messing around with things. What's on your roadmap? Are you playing with anything else with AI that you see could be helpful that you haven't implemented yet?
[00:20:45] Gaurav Mittal Yeah. Right now a lot of production tickets are coming in, and these are not bugs, they're just tickets — for example, customers requesting something. At the moment we have to look at them manually and figure out what kind of solution we can provide. My roadmap here is a utility where I'm trying to use ML models so that when a ticket comes in, it can furnish some information based on previous tickets that have already come up. The second part is, again, sort of like email categorization. Right now one team has emails, and they're using an extension in their automation tool which can read the emails, and from these emails they need to label the data. Data labeling is a different kind of ML model — you need to label, okay, which flag this email is about. This is a completely new area for me; I'm exploring how to label the data using AI.
[00:21:51] Joe Colantonio Nice. Since GenAI came out, AI has been around forever. But since GenAI came out like two years ago, it's rapidly evolving. It's a hard question to ask, but where do you see things going with automation maybe in 3 to 5 years?
[00:22:04] Gaurav Mittal Automation — it's like a race. And it's not only about automation; I see all fields benefiting from AI. Specifically with automation, it will be automation plus AI. For example, the utility I mentioned, email categorization using AI — that's an automation utility that wouldn't have been possible without AI. In the next three to five years, I'd say there will be a lot of scope: we'll be able to use the different kinds of AI models that are coming, alongside automation utilities, to reduce manual effort. For example, in the healthcare domain I can see a lot of new ML models coming up, like clinical models — that area is growing rapidly, and doctors are trying to use AI there. So utilizing AI models along with automation will be very helpful.
[00:23:01] Joe Colantonio Awesome. If someone's listening, where do you think they could get started if they wanted to dip their toe into the open-source AI thing to help with their automation? Is there one library where you'd say, you've got to try this, you'll probably get a benefit right away?
[00:23:14] Gaurav Mittal My favorite one is NLP — I'm a big fan of text classification. So I'd say start with natural language processing. The easiest example to start with is looking at restaurant reviews. A review could be structured text or unstructured text, and you see how an NLP model gives you output: okay, this is a positive review, this is a negative review. That's the easiest way to start, and it's very interesting. Some libraries you can use — like the ColumnTransformer library, basically.
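As a starter along those lines, here's a minimal sketch of the restaurant-review example, mixing unstructured review text with one structured column via scikit-learn's ColumnTransformer (the data, column names, and model choice are invented for illustration):

```python
# Hypothetical starter example: classify restaurant reviews as positive/negative.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

reviews = pd.DataFrame({
    "text": [
        "The food was amazing and the staff were friendly",
        "Waited an hour and the pasta was cold",
        "Great atmosphere, will definitely come back",
        "Terrible service, never again",
    ],
    "price_level": [2, 3, 2, 3],   # a structured feature alongside the free text
    "label": ["positive", "negative", "positive", "negative"],
})

features = ColumnTransformer([
    ("review_text", TfidfVectorizer(), "text"),    # unstructured text -> TF-IDF
    ("numeric", "passthrough", ["price_level"]),   # structured column passed through
])

model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(reviews[["text", "price_level"]], reviews["label"])

print(model.predict(pd.DataFrame(
    {"text": ["Lovely desserts but the soup was bland"], "price_level": [2]})))
```

With a real dataset (Kaggle has several restaurant-review sets), the same pipeline scales up unchanged.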
[00:23:50] Joe Colantonio Are there other cases where you see shift left not being done correctly, or things that people mess up?
[00:23:57] Gaurav Mittal Yeah, that is correct. If it's a complex UI — for example, when you're discussing the whiteboard diagrams and it's a messy diagram, or the requirements are changing quite frequently — Selenium has always struggled with dynamic UIs, and it's the same issue with the shift-left approach. AI cannot help you with that; again, you need to invest time manually.
[00:24:20] Joe Colantonio Absolutely. I've seen another trend with people moving from Selenium to Playwright. A little off topic — what are your thoughts on that?
[00:24:26] Gaurav Mittal No, that's 100% correct. Selenium works from the back end, whereas Playwright works directly on the UI. It's quite fast, and from what I have seen, the performance of Playwright is much better than Selenium.
[00:24:41] Joe Colantonio Controversial. Nice. Do you see implementations for these libraries — maybe incorporating Playwright with some of these AI models?
[00:24:48] Gaurav Mittal Yeah, yeah. An ML model is a completely different program; you just have to invoke it from your automation utility. Let me give you an example with an AWS architecture. Suppose you've written a Lambda function and that Lambda function is performing several tasks. You can keep your ML model in an S3 bucket as a pickle file and just load it in your Lambda function. So your Lambda code is there, and your code — say your Playwright code — just has to invoke the ML model. Whatever format you deploy your code in — it could be a jar, a war, those kinds of formats — the ML model works the same way: you deploy it as a pickle file. To invoke that pickle file, you just call one API. The API takes the input and invokes the Lambda function, the Lambda function loads the model from S3, and the output comes back to your Playwright code. It is very doable; it is quite easy.
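Here's a minimal sketch of that pattern in Python: a Lambda handler that loads a pickled model from S3 and serves predictions, and a Playwright test that just calls the API. The bucket name, object key, API URL, page URL, and the #queue-… selector are all placeholders, and the event shape assumes an API Gateway proxy integration.

```python
# --- lambda_function.py (hypothetical) ---
import json
import pickle

import boto3

s3 = boto3.client("s3")
_model = None  # cached across warm invocations


def handler(event, context):
    global _model
    if _model is None:
        obj = s3.get_object(Bucket="my-ml-models", Key="email_classifier.pkl")
        _model = pickle.loads(obj["Body"].read())
    text = json.loads(event["body"])["text"]
    return {"statusCode": 200,
            "body": json.dumps({"category": _model.predict([text])[0]})}


# --- test_email_routing.py (hypothetical Playwright + requests test) ---
import requests
from playwright.sync_api import sync_playwright

API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/classify"


def test_incident_email_is_routed():
    # Ask the deployed ML model for the category of a sample email
    category = requests.post(
        API_URL, json={"text": "Checkout fails with a 500 error"}
    ).json()["category"]

    with sync_playwright() as p:
        page = p.chromium.launch().new_page()
        page.goto("https://example.com/support-queue")   # placeholder app URL
        # Assert the UI shows the ticket under the category the model predicted
        assert page.locator(f"#queue-{category}").is_visible()
```

The key design point, as Gaurav says, is that the test code and the ML model stay two separate programs that only talk over one API call.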
[00:25:52] Joe Colantonio Very cool. Okay, Gaurav, before we go, is there one piece of actionable advice you can give to someone to help them with their AI testing efforts? And what's the best way to find or contact you?
[00:26:02] Gaurav Mittal You can contact me at my email address, and on Twitter, Gaurav1995 — you can always reach out to me and I'd be very happy to help with any kind of automation task involving AI. I would just say, please go with the flow. If AI is needed in your automation, you'll hit a point where you get stuck, and when you Google it, you'll find a solution. Most of the AI models I've encountered so far are free — they're open source. So don't worry: you just have to write some piece of code for your ML model, deploy it, and invoke it from your actual code. It's quite simple — two different kinds of programs interacting with each other.
[00:26:48] Joe Colantonio Do you Google it or ChatGPT it now?
[00:26:49] Gaurav Mittal No, I Google it. Mostly I Google it.
[00:26:52] Joe Colantonio Really. Yeah.
[00:26:53] Joe Colantonio Thanks again for your automation awesomeness. Links to everything we covered in this episode can be found at testguild.com/a523. And if the show has helped you in any way, why not rate it and review it on iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:27:28] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:28:10] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.