AI as Your Testing Assistant with Mudit Singh

By Test Guild

About This Episode:

In this episode, we explore the future of automation, where natural language bridges human intent and machine execution.

Mudit unveils the concept of an “AI testing assistant” that interprets natural language to generate and manage test scenarios seamlessly. We'll discuss how this revolutionary tool supports popular frameworks like Selenium, Appium, and Playwright, enabling quality engineers to enhance existing code and execute tests more efficiently.

Get ready to learn about innovative features like error classification, root cause analysis, and flakiness detection that streamline debugging and improve test reliability.

Join us as we explore KaneAI, LambdaTest's smart AI-powered test agent for high-speed quality engineering teams. It lets you create, debug, and evolve tests using natural language, and it's designed to democratize quality assurance and tackle the challenges of CI/CD integration.

Listen up!

Exclusive Sponsor

Are you looking for a smarter way to optimize your test automation? Kane AI, from LambdaTest, might be your next must-have tool. Powered by AI, Kane revolutionizes how you approach automated testing by helping you identify bugs faster, improve test coverage, and increase overall efficiency. Whether you're a developer or a tester, Kane simplifies writing and managing tests, giving you more time to focus on what matters most – delivering high-quality software.

Take advantage of this cool innovation! Sign up now for early access to the private beta at https://testguild.me/kaneai and be among the first to experience the future of AI-powered test automation.

About Mudit Singh


Mudit Singh, Head of Growth and Marketing at LambdaTest, is a seasoned marketer and growth expert, with over a decade of experience in crafting and promoting exceptional software products. A key member of LambdaTest's team, Mudit focuses on revolutionizing software testing by seamlessly transitioning testing ecosystems to the cloud. With a proven track record in building products from the ground up, he passionately pursues opportunities to deliver customer value and drive positive business outcomes.

Connect with Mudit Singh

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:00:34] Joe Colantonio Today we're diving into the world of AI-driven automation assistants with Mudit Singh from LambdaTest. In this episode, we'll explore a future where natural language bridges human intent and machine execution to help you with your testing. Mudit unveils the concept of an AI testing assistant that interprets natural language to generate and manage test scenarios seamlessly. We'll discuss how this awesome tool supports popular frameworks like Selenium, Appium, and Playwright and helps quality engineers enhance existing code and execute tests more effectively. Get ready to learn about innovative features like error classification, root cause analysis, and flakiness detection that streamline debugging and improve test reliability. Join us as we explore KaneAI, LambdaTest's smart AI-driven test agent for high-speed quality engineering teams, which allows you to create, debug, and evolve tests using natural language and is designed to democratize quality assurance and tackle the challenges of CI/CD integration. You won't want to miss this episode. Check it out.

[00:01:42] Joe Colantonio Hey, before we get into it, are you looking for a smarter way to optimize your test automation? Well, KaneAI from LambdaTest might be your next must-try tool. Powered by AI, KaneAI revolutionizes how you approach automation testing by helping you identify bugs faster, improve test coverage, and increase overall efficiency. Whether you're a developer or a tester, Kane simplifies writing and managing tests, giving you more time to focus on what matters most: delivering high-quality software. Take advantage of this cool innovation. Sign up now using the link down below, and let me know your thoughts in the comments as well.

[00:02:18] Joe Colantonio Hey, Mudit. Welcome back to The Guild.

[00:02:21] Mudit Singh Hey, Joe. Thanks for having me. Really great to be back here chatting with you again.

[00:02:29] Joe Colantonio Yeah, I always love speaking with you. I know you did a tour of the U.S. recently?

[00:02:33] Mudit Singh Yeah.

[00:02:34] Joe Colantonio I guess I'm already off script, but I'm just curious to know. You speak with a lot of people, and you're very personable and very knowledgeable. Did you find any issues during your tour that people are still struggling with, ones that maybe caught you off guard, or maybe validated what you thought?

[00:02:51] Mudit Singh Yeah, there's a lot. It was a very eye-opening tour for me. I visited around 18 cities in the span of three months: Dallas, Austin, Tampa, Houston, nearly every city on the East Coast and West Coast, even some in between. I got a chance to interact with nearly 350 people across startups and enterprises alike. And a lot of things were brand new for me as well. We have been in this ecosystem of automation, testing, and quality engineering for a long time, but things I thought were problems that got solved five or six years ago are still problems, even for big enterprises. Big enterprises are still struggling with challenges like how to write tests. We have always talked about automation, and automation has been around for more than a decade; a lot of people have been doing it at scale. But what about automation of automation? What about adopting tooling around automation and CI/CD? For example, I did a survey of around 1,600 companies and learned that even though 89% of them use CI/CD tooling, only 49% of those companies use CI/CD to run their automation tests. Their automation tests are still kicked off manually even now. I've been building LambdaTest for seven years, and we have been talking about automation testing a lot, but I never got a chance to do a very in-depth deep dive into the enterprise ecosystem, as everybody likes to say here. When I interacted with them, I realized that, yeah, there are problems we talked about five or six years ago that are still there, whether because of processes or because there is no buy-in from leadership. There are a lot of obstacles and roadblocks in between, but these are still problems that have to be solved. There are people who are very ahead in the game, adopting AI/ML tooling and technologies in their quality engineering, and there are people who are lagging far behind. That motivated us to build more at LambdaTest, and it validated a lot of our product line as well. Specifically, it validated HyperExecute, our product that helps with the CI/CD side of the quality engineering space, and at the same time it helped us think up new solutions for bridging gaps we didn't know were there. That's how we came out with KaneAI, the new natural-language test authoring platform that we recently released.

[00:05:32] Joe Colantonio All right, that's what I want to get to. I know you take a lot of feedback from users and build it into your product. Fairly recently, maybe a month ago, give or take, you released something called KaneAI. At a high level, what is KaneAI? And then maybe we can dive into the pain points you heard from people and how KaneAI can help with the ones you heard about while you were here in the U.S.

[00:05:55] Mudit Singh We have been talking a lot about AI, and we have incorporated AI-based technologies across the LambdaTest product line. We have focused very heavily on AI's capability to find insights from large sets of data, the runs of hundreds of thousands of automation tests: finding flaky tests, doing error classification, and all of those things. Cognitive AI and predictive AI capabilities have been incorporated into the LambdaTest feature set for nearly two and a half years. But then we started to think about what more we could build, specifically using generative AI. I got a chance to talk with a lot of enterprises, and we figured out that test authoring, even in the age of Selenium and Playwright and so many different tools, including codeless tools, is still a space that can be revolutionized a little bit. So we decided: let's create a platform for natural-language-based test automation. That's one of the best use cases of generative AI technologies. Not just create automation tests, but create, evolve, and debug tests using natural language. That's what the KaneAI platform is about. The name comes from our first user, Brad Kane, the first user, or you could even say first investor, of the LambdaTest platform. He has been around with us for almost seven and a half years, and we really appreciate his feedback and his support of our platform so far. We took that moniker, Kane, and ran with it, so KaneAI. That's how the name came about. KaneAI's capability is to create, evolve, and debug tests using natural language processing. One term that, in fact, the ... team coined for us was that we are democratizing quality assurance. Our aim is to bring more people into quality assurance, and natural-language-based test authoring helps us do that. Even right now, it requires a lot of special skill sets to write an automation test, even using a low-code or no-code platform. But simple natural-language-based test authoring will let a lot of other folks get into the process. Project managers, product managers, even CX-level people can become involved in the quality assurance process: write their tests in simple natural language, maybe maintain the tests in natural language as well, and help the quality assurance, development, and engineering teams in parallel. It expands the horizon of people involved in quality assurance through the right tooling.

[00:08:43] Joe Colantonio Right. So I want to talk a little bit more about natural language. It's a concept that's been around for a long time, like keyword-driven automation. QTP used to have dropdowns so you could select things. You have things like business process testing. Then BDD came along and people started using that as a kind of natural language. But how does AI make natural language different? A lot of people I speak to use tools that just write the tests for them in natural language, so what is natural language plus AI? How does AI come into play when you talk about natural language, I guess, is the question.

[00:09:16] Mudit Singh Language models and natural language processing have been a game changer in this space. When we talk about the low-code, no-code tooling out there right now, there is still a learning curve involved, specifically around assertions, around defining objectives, around defining things like: these are the high-level things I want to achieve, these are the inputs, these are the outputs. Even creating simpler flows with those low-code, no-code tools, using workflow-based builders as you mentioned, involves a little bit of friction. And yes, I'll say it's a little simpler than writing a code-based automation test, but there is still a lot of friction, and that is one of the reasons those tools have not been able to drive a very big revolution in this area. There is still a skill set required to use that tooling, separate from the quality engineering side of things; the tooling itself becomes a challenge. That is what we wanted to change. It's very natural to express yourself in language, the way we're talking right now. The way people have been using ChatGPT or similar tooling shows the kind of thing you can do: you write something, you ask an agent, or you could say a coworker: these are the things I want to do, our software should be able to do these kinds of things, and we should get these kinds of outputs. And that is pure natural language. If a machine can understand that and perform those directives, those objectives, perfectly, then that's a better way to interact with the system. We have been talking about APIs, application programming interfaces, for a long time. This is like a human API: converting human language into something that machines can read.

[00:11:13] Joe Colantonio All right. So that's an interesting point as well. You call this like an AI testing assistant?

[00:11:18] Mudit Singh Yeah.

[00:11:18] Joe Colantonio Not a replacement?

[00:11:19] Mudit Singh No.

[00:11:19] Joe Colantonio Some people actually call it a replacement. This is an assistant. How does this assist me, then? Do I talk to it and say, hey, write me a script that logs into my application, adds a patient, performs surgery, and then bills them? Or do I have to perform those actions, and it knows how to write natural language for what I just recorded?

[00:11:40] Mudit Singh No, it's even simpler. For example, I can go to KaneAI and say, I want to test a login workflow on Lambdatest.com. Go. And it will do just that: it will go directly to Lambdatest.com, click log in, and enter the parameters that were provided. For example, I can write a prompt: do a login workflow for LambdaTest where the username is XYZ and the password is ABC, and based on that it will be able to perform the whole workflow. Now, it's not 100% foolproof, so on top of that we have added features for manual interaction as well. For example, if in between you want to stop the application and do some kind of manual action to tweak it, you will be able to do that. And there is another way to do everything as well. If you already have running automation scripts, you can run them at LambdaTest, and we will promptify them for you: we will create a prompt out of those running scripts, and you can then run it on KaneAI. You can evolve those scripts using natural language and generate code that appends to your existing code, so you can evolve your current test code using NLP-based AI test authoring. Simple human instructions will help you start your automation journey.
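
To make the flow concrete, here is a toy sketch of the prompt-to-steps idea. The parsing below is an editor's illustration only; it does not reflect how KaneAI actually plans steps.

```python
# Toy "planner": turn a natural-language prompt like the one Mudit describes
# into discrete, ordered test steps. Purely illustrative, not the KaneAI engine.
import re

login_prompt = (
    "Do a login workflow for lambdatest.com where the "
    "username is XYZ and the password is ABC."
)

def plan_steps(prompt: str) -> list[str]:
    """Extract a site, username, and password, and emit ordered test steps."""
    site = re.search(r"for (\S+?)(?: where|$)", prompt).group(1)
    user = re.search(r"username is (\S+)", prompt).group(1)
    pwd = re.search(r"password is (\S+?)\.?$", prompt).group(1)
    return [
        f"Open https://{site}",
        "Click the 'Log in' button",
        f"Type '{user}' into the username field",
        f"Type '{pwd}' into the password field",
        "Submit and assert the dashboard is visible",
    ]

for step in plan_steps(login_prompt):
    print("-", step)
```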

[00:12:56] Joe Colantonio All right, so just expand on that then. It writes the natural language for you, but obviously underneath that are technical scripts, the code that actually performs that natural language. So is it Selenium? Is it Playwright? And can I take that code down and just run it on its own, without the AI assistant?

[00:13:13] Mudit Singh Yeah. So far we support Selenium, Appium, and Playwright: Appium for mobile, and for web Selenium, Playwright, and Cypress, plus nearly 35 other frameworks in other languages as well, so JavaScript, Java, Python. From the prompts I mentioned, we create a complete test scenario, run the test scenario, and you can export it in the programming language of your choice. It's agnostic in that sense. And then you can evolve that code if you want to work in a code-based way. It's the other way around as well, as I mentioned: if you have already-existing code and you want to evolve it, you can do that too.
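
For a sense of what an exported script might look like, here is an editor's sketch in standard Selenium Python. The selectors are placeholders, not LambdaTest's actual page structure or verbatim KaneAI output.

```python
# Sketch of an exported login test in Selenium Python. The flow mirrors the
# prompt above; element locators are illustrative placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://www.lambdatest.com")
    driver.find_element(By.LINK_TEXT, "Log in").click()
    driver.find_element(By.NAME, "email").send_keys("XYZ")
    driver.find_element(By.NAME, "password").send_keys("ABC")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    # Assert the post-login page loaded before calling the test a pass.
    WebDriverWait(driver, 10).until(EC.url_contains("dashboard"))
finally:
    driver.quit()
```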

[00:13:54] Joe Colantonio On that point, say I have 5,000 Selenium tests and I want to AI-enhance them. Can I feed them into this tool, and will it convert them to natural language? Is that something I would do? And then can I utilize all the AI-based technology once it's in the platform?

[00:14:10] Mudit Singh Yeah, in fact, that is something we do. LambdaTest's core expertise has been in executing tests. As you know, over the past seven years we have built up the frameworks and processes to execute your tests at massive scale across different browsers, operating systems, mobile devices, and whatnot. We have the infrastructure in place to execute your tests, and we have been providing a lot of insights over test execution as well. So whenever you run, let's say, a Selenium codebase, we break it out by each Selenium command and give you the results, screenshots, videos, logs, and everything based on those commands. What we are doing is essentially converting those executed Selenium commands back into a prompt, and then you can use that prompt to evolve your tests and do more. For example: after this login process, now go into a checkout process, and it adds onto the current code. So you can evolve it, or you can also maintain it. Why is your test failing? Because the login process has changed, the username or password has changed, or maybe the sign-in process has changed. You can evolve it: change the sign-in username and password from XYZ to ABC, something like that. Evolving the test using natural language is something you can do.
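
Here is a minimal sketch of that "promptify" direction: rendering a log of executed commands back into editable natural-language steps. The command log format is an assumption for illustration, not LambdaTest's real data model.

```python
# Sketch: map each recorded Selenium command to one natural-language step.
executed_commands = [
    {"command": "get", "value": "https://www.lambdatest.com"},
    {"command": "click", "locator": "link text=Log in"},
    {"command": "send_keys", "locator": "name=email", "value": "XYZ"},
]

TEMPLATES = {
    "get": "Open {value}",
    "click": "Click the element located by {locator}",
    "send_keys": "Type '{value}' into the element located by {locator}",
}

def promptify(commands):
    """Render each recorded command as one editable natural-language step."""
    return [TEMPLATES[c["command"]].format(**c) for c in commands]

for i, step in enumerate(promptify(executed_commands), 1):
    print(f"{i}. {step}")
```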

[00:15:35] Joe Colantonio All right, that's another good point. You're also experts at running things on multiple devices in the cloud and running tests in parallel. If someone starts off using your AI assistant, do they have to worry about creating the script in a way that it's able to run in parallel? A lot of times people have dependencies, like for some reason test A needs to run before test Z. Does this help at all with any of those dependencies or anything like that?

[00:15:59] Mudit Singh So we have a built-in test manager in place for whenever a test has to be executed. First of all, the whole KaneAI platform is tightly integrated with our code execution platform, HyperExecute. Any test created on KaneAI automatically executes on the HyperExecute side. That means the infrastructure for running those tests is covered, and any test you create here will automatically be migrated to, or run over, the HyperExecute side via our built-in test manager. That will help you organize your test suites and manage everything down to which test has to run before which. For the dependency side of things: because KaneAI is still in beta, the core features we have right now are based on web and mobile applications that are live online. But having dependencies injected into that system is something we are working on. We have an API-based feature set in place that injects the dependencies into the working environment, and then the tests created using NLP are executed. So yeah, that is in the works; I think that would be there by ....

[00:17:15] Joe Colantonio Does that help with test data then as well? Can you inject random test data?

[00:17:19] Mudit Singh Exactly. That's in fact the thing we are working on right now. For really high-level, complex enterprise workflows, you can define the test in NLP, but to run at scale you really need to inject things like CSVs or PDFs or whatever data is required to run the test at scale. So that is something we are working on. But the other way around: if you are well versed with coding, you can write the boilerplate using NLP at LambdaTest and then evolve it using your code-based approach.
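
A short sketch of that code-side workaround: take the NLP-generated boilerplate and drive it with your own CSV data. The inline data and column names below are invented for illustration.

```python
# Sketch: feed CSV rows into an exported login test. Inline data stands in for
# a real file so the example is self-contained.
import csv
import io

CSV_DATA = """username,password,expected
alice,secret1,success
bob,wrongpw,failure
"""

def load_login_cases(src):
    """Yield (username, password, should_succeed) tuples from CSV rows."""
    for row in csv.DictReader(src):
        yield row["username"], row["password"], row["expected"] == "success"

for username, password, should_succeed in load_login_cases(io.StringIO(CSV_DATA)):
    # Each row would be plugged into the exported login test from earlier.
    print(f"login as {username!r}: expecting success={should_succeed}")
```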

[00:17:54] Joe Colantonio That's good. It's like they're not locked in and have to use it going forward. They can use it to get started, as a starting base, and then if they're an SDET, the SDET can go crazy with the code it helped generate. Nice.

[00:18:04] Mudit Singh Yeah. As I said, these are the core problems that we want to solve, rather than create new ones. Migration has been a big challenge. Let's say a new tool comes along, or a new technology or framework comes into place. Migration has been a big problem, and we do not want to create another tool that you have to migrate to or migrate from. There are options for that. In fact, migration is one of the use cases that we plan to solve as well. For example, if you are using, let's say, Selenium 2.x or Selenium 3.x, you can run it on LambdaTest, we will create a prompt out of it, and then you can export it back in, let's say, Selenium 4.x. You have essentially migrated your codebase from the older version to the new version. Or the other way around: run Cypress code and then migrate it to Selenium or Playwright, or vice versa.
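
As one concrete example of the version gap he mentions (an editor's illustration, not KaneAI output): Selenium 4 removed the find_element_by_* helpers in favor of By-based locators, so a regenerated script would change roughly like this.

```python
# The kind of mechanical change a prompt-based migration could regenerate.
# Selenium 3.x style, removed in Selenium 4:
#     driver.find_element_by_id("email").send_keys("XYZ")
# Selenium 4.x style, which a regenerated script would use instead:
from selenium import webdriver
from selenium.webdriver.common.by import By

def fill_email(driver: webdriver.Chrome, address: str) -> None:
    # "email" is a placeholder id; By-based locators replace find_element_by_*.
    driver.find_element(By.ID, "email").send_keys(address)
```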

[00:18:57] Joe Colantonio Wow! That's big. People ask me all the time: hey Joe, I have all these Selenium tests, how do I migrate to Playwright? First I ask why they want to do it, but if there's a legit use case for it, it sounds like this will convert it right back and forth?

[00:19:10] Mudit Singh Yeah.

[00:19:10] Joe Colantonio Nice.

[00:19:11] Mudit Singh It can be converted, yes.

[00:19:13] Joe Colantonio That's very cool.

[00:19:14] Mudit Singh I'm not saying it will do a 100% conversion right now, but at the least it helps in a lot of ways. Yeah.

[00:19:20] Joe Colantonio All right. Something a lot of people struggle with is having multiple code bases, because they have to run the same tests against web as they do on mobile, and I know those are vastly different. So maybe the answer is no, and maybe it doesn't make sense. But if I have a test suite that I know runs against my web app, and it's the same kind of flow, though probably not the same steps, and I want to run it against a mobile device, can I do that with just one script? Or is that something you're still going to need to figure out?

[00:19:46] Mudit Singh So if the prompt is the same, you can definitely copy-paste and run the same prompt to create the same steps, because ultimately we have prompt-based authoring: you can use the same prompt for the mobile app and for the web app. For example, I have a demo using Swiggy, which is an online food-ordering app, very popular in India. What I did was create a prompt: I want to order this particular dish from this particular restaurant, where I am a user at this particular location. I can use that prompt for the web app, and the same prompt for the mobile app. But of course the scripting underneath is a little bit different, because one is using Selenium with Python and the other is using Appium. So yes, the same prompt works; that's something that can help. As for the other way around: it's easy to do the same thing across different browsers, resolutions, and browser versions, but migrating the same script cross-device will still be a challenge. That is something we can explore as a feature set.
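
Here is an illustrative sketch of that "one prompt, two targets" idea: the prompt is reused as-is, and only the generated driver layer differs. Capability values, file paths, and the wiring are placeholders, not the generated KaneAI scripts themselves.

```python
# Sketch: the same prompt could compile to a Selenium session for web and an
# Appium session for mobile. Values below are illustrative placeholders.
from selenium import webdriver as selenium_webdriver
from appium import webdriver as appium_webdriver  # pip install Appium-Python-Client
from appium.options.android import UiAutomator2Options

ORDER_PROMPT = (
    "Order a Margherita pizza from 'Pizza Palace' on Swiggy as a user "
    "located in Bangalore, and assert that it appears in the cart."
)

def web_session():
    # Web run: the prompt would be compiled down to Selenium steps.
    return selenium_webdriver.Chrome()

def mobile_session():
    # Mobile run: the same prompt compiles down to Appium steps instead.
    options = UiAutomator2Options()
    options.app = "/path/to/swiggy.apk"  # placeholder path
    return appium_webdriver.Remote("http://localhost:4723", options=options)
```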

[00:20:50] Joe Colantonio But because you also have all these other capabilities on the backend, can I just say, hey, run this against an iPhone 15 Pro, and it just automatically knows what to do?

[00:21:02] Mudit Singh Exactly.

[00:21:02] Joe Colantonio Without configuring it.

[00:21:04] Mudit Singh Yeah, yeah, that works. Or running against this particular browser or browser version: run it against an iPhone 15 Pro running this Chrome browser version, or running Firefox, or running Safari. We can certainly do that.

[00:21:18] Joe Colantonio Does it also write tests for you, or does it take the tests that you have and say, hey, you have a gap? Does it know the context of your application? I guess the question is whether it knows, maybe, that you're not covering this feature, or can say, let me generate some tests I know are high risk for you, for that coverage.

[00:21:34] Mudit Singh Okay, so somebody asked me this just yesterday as well; it's really a question about coverage. I have created a sign-up test; will it also create a login test as well? Not right now. But I noted that feature idea down yesterday: how we can help you evolve or extrapolate your current test cases into new test cases. As I mentioned, we have a test case manager, or test manager, and there too we are using generative AI for creating new test cases, so we can definitely help evolve the current prompts using that tooling. But giving you direct insights, for example, we are running these tests right now, do you want to run this related test as well? That is something we are not doing. But yeah, that's definitely a good use case.

[00:22:25] Joe Colantonio A lot of times you hear that this tool is flaky or this language is flaky, when in fact it's the way they wrote the test that's flaky. Selenium may get a bad rap when really it's a poorly written Selenium test. Does this have built-in best practices? Does it know, all right, this is a Selenium test, let me add a wait, without me having to figure out what wait mechanism to use to make it more resilient?

[00:22:47] Mudit Singh Yeah, in fact, that is a very important point. Flaky tests have been my favorite topic for nearly six and a half years now. Flaky tests come from multiple issues. One, of course, is a network issue or an issue with the hardware itself. Then, the system under test has changed, nobody told you about it, and the tests go flaky. And then, of course, the test code you wrote is not written properly. LambdaTest's seven years of expertise has been building the right network and environment capabilities, so your tests are not flaky because of hardware; that is something we already had covered. Now it comes down to writing test cases that are inherently not flaky in nature. One of the first approaches when we were building KaneAI was to go visual-first. That means for any prompt that you write, we check visually, in the DOM that was created, whether that element is available or not, and if it is available, we work with the visual cue itself. That's actually the way a user tests. So, for example, if I say login flow for LambdaTest, it will actually find the login button visually and then click on it, rather than doing, let's say, a simple locator-based lookup. On a similar note, there's a feature we have on HyperExecute and our other automation platforms as well: auto-healing. Auto-healing uses your past test data and different locators. You run a test at LambdaTest; the next time you run the same test, the locator has somehow changed, either in the code or maybe somewhere in the backend. We will try to run the same test using different locators that were pre-stored when the test last ran, and if the test passes with one of those other locators, we notify you: this test was failing with the current configuration, but if you use these locators, it passes; let us know if you want to run it using this locator and mark it as a pass. That kind of locator-based auto-healing addresses a big debt. Taking a visual-first approach automatically cuts down a very large number of flaky tests, because now we are testing as an end user rather than doing locator-based checks. Sometimes, for example, an element is hidden, it's not visible, but the locator is there and the test is still marked as passed, even though that element is not actually there for the user. The visual-first approach will tell you that even though you're asking me to click on this button, the button is not visible to the user, or it's even off the screen because somebody messed up the CSS. Those kinds of things become easy to catch.
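
Here is an editor's sketch of the auto-healing idea Mudit describes: when the primary locator fails, retry with locators stored from earlier passing runs and report which one worked. The stored locators are placeholders, and this is illustrative, not LambdaTest's implementation.

```python
# Sketch: locator fallback with a visual-first visibility check.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

STORED_LOCATORS = [  # captured when the test last passed; order = confidence
    (By.ID, "login-btn"),
    (By.CSS_SELECTOR, "button[data-testid='login']"),
    (By.XPATH, "//button[normalize-space()='Log in']"),
]

def find_with_healing(driver, locators=STORED_LOCATORS):
    """Return the first visible element matched by any stored locator."""
    for i, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if not element.is_displayed():
                continue  # visual-first: a hidden element shouldn't count
            if i > 0:
                print(f"Healed: primary locator failed, matched via {by}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException("No stored locator matched a visible element")
```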

[00:25:32] Joe Colantonio Nice, I love that. I guess another thing people sometimes struggle with: say I'm a developer, I have a bug, I fix the bug. Now I need to validate the fix, and I'm like, oh gosh, where does this functional test live? I don't know. I thought you said something about tagging, where I could just say in Slack, hey Kane, run a test that touches this feature or this functionality. Am I remembering that correctly? Is that something this does?

[00:25:57] Mudit Singh Right now we have an integration with Jira. We are working on Slack, Microsoft Teams, and other communication-tooling integrations. What that means is you'll be able to write a prompt directly from Slack and run it in KaneAI; in fact, that is something that should be live this year itself. And yeah, that's a use case somebody can definitely do. Or the other way around: you already have a PRD or ticket in Jira. Say I fixed an issue and I want to run the test and validate it. I can go to my Jira ticket, which obviously every developer has open on a separate screen, right? I open my Jira ticket and click a button, and it will read the requirement steps, or whatever PRD is in the ticket, and run those tests back in KaneAI. With a click, they will be able to run those steps directly on LambdaTest KaneAI.

[00:26:52] Joe Colantonio Nice. I love the idea that maybe I get a user reporting an issue to me.

[00:26:57] Mudit Singh Yeah.

[00:26:58] Joe Colantonio And they say they have an iPhone 12 or something, and I don't know if I can duplicate it. Can I replicate it with your platform? Because it integrates with the backend, is it as easy as me just saying, run this test against whatever Android with this flavor, and you can see with the videos and everything: okay, I see now what they're saying. Is that true?

[00:27:17] Mudit Singh If you have a detailed user trace, meaning these are the actions the user performed, and you want to run it back on our platform for a specific use case on a specific device configuration, you can definitely do that.

[00:27:30] Joe Colantonio Very cool. I also saw something about a two-way editor that makes maintenance easier. Is that something you could talk a little bit more about?

[00:27:38] Mudit Singh In fact, that's the same thing I was talking about. You always have a sync between the code you already have and the KaneAI side: you can evolve your existing code using KaneAI, and the other way around, if you already have a prompt-based KaneAI test, you can export it in a code format.

[00:27:57] Joe Colantonio Love it. All right, I know you said a lot of the people you spoke to didn't have their automated tests integrated with their CI/CD systems. Does this help at all with that?

[00:28:05] Mudit Singh So right now we are working on that direct integration, but our platform HyperExecute, which we built before KaneAI, has CI/CD features built in. HyperExecute has this cool YAML-based CI that can help you orchestrate the tests you have already built. A lot of orchestration, a lot of, you could say, karate or ninjutsu is required to orchestrate and scale up those tests, and that's what HyperExecute already tackles. Things like auto retries; load balancing, so distributing your hundreds of tests across the 50 machines you have in the most efficient manner, to get the least test execution time; and a fail-first mechanism. For example, you ran 100 tests last time, and 20 of them were failing. The next time you run, the failing tests execute first. That gives your developers the fastest feedback on whatever changes you have made to the system: those 20 tests validate the fix you were making, they execute first and give you feedback that everything is working, and the rest continue to execute to build up confidence. Those kinds of feature sets we already have in the HyperExecute platform, and since KaneAI runs on HyperExecute, you will have those same feature sets available in KaneAI as well.
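
A minimal sketch of the "fail first" ordering pass he describes: tests that failed in the last run are scheduled before the rest, so a fix gets the fastest possible feedback. The data shapes here are illustrative, not HyperExecute's YAML or internals.

```python
# Sketch: reorder a test queue so previously failing tests run first.
last_run_results = {
    "test_login": "failed",
    "test_checkout": "passed",
    "test_search": "failed",
    "test_profile": "passed",
}

def fail_first_order(test_names, results):
    """Previously failing tests first; stable sort keeps order within groups."""
    return sorted(test_names, key=lambda t: results.get(t) != "failed")

queue = fail_first_order(list(last_run_results), last_run_results)
print(queue)  # ['test_login', 'test_search', 'test_checkout', 'test_profile']
```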

[00:29:40] Joe Colantonio Because, once again, I know another thing a lot of people struggle with is debugging. They run the tests overnight, they come in, they have a hundred failures, and they go try to debug them. By the time they're done debugging, someone's already kicked off the next build, and it's like, I can't keep up. Are there any AI-powered debugging features to help them triage really quickly? To know, all right, we have 100 failures, but fix this one thing and you fix 99 of them, something like that?

[00:30:04] Mudit Singh Yeah, that's right inside the feature set. As I was saying earlier, when we asked what core AI features we could incorporate into LambdaTest, the first thing we came out with was the cognitive AI feature set. That means looking at hundreds of thousands of tests and seeing what insights can be found there. One of those insights was error classification. Let's say you have 2,000 tests, and 200 of them are breaking. Because we have access to all the executed tests, all the testing data, we can look at the data and give you patterns. For example, 200 tests are failing, but of those, 150 are failing because of one issue: your login button has failed, or your login footer is breaking. We give you a classification of those errors so that you don't have to debug 150 tests; there's just this one issue, so debug it, run again, and see if it works out. Then you can spend your time debugging the rest of the 50, and even those 50 usually classify into a few sets. On top of that, we are able to highlight flaky tests: tests that were passing earlier, or failing earlier, and are now giving different results based on your test patterns. We are even able to highlight that nothing has changed, yet these tests started failing, or started passing; there is a flakiness in these tests to look into. So we help them do error classification and flaky test analysis. Another aspect we are really proud of, in fact our first GenAI feature, was giving them root cause analysis, or error remediation. Because a billion tests have been executed on the platform, we have an understanding of all the error codes and why each error code occurred. We are able to give them insights: these tests are failing because of this particular issue; you can go ahead and check these areas in your code and see if that can be fixed. We give them an RCA with a single click on why these tests are failing and the possible ways to remediate that. And I think another target for GenAI is that we could open a pull request back into the system and fix that test for them, but that is something we are still working on.
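
An editor's sketch of the error-classification idea: collapse hundreds of failures into a few buckets by normalizing their error messages, so one fix can clear many tests at once. The messages below are made up for illustration.

```python
# Sketch: group failures by a normalized error signature.
import re
from collections import Counter

failures = [
    "NoSuchElementException: #login-btn not found (test_login_chrome)",
    "NoSuchElementException: #login-btn not found (test_login_firefox)",
    "TimeoutException: page load exceeded 30s (test_checkout)",
    "NoSuchElementException: #login-btn not found (test_login_safari)",
]

def signature(message):
    """Strip test-specific noise so identical root causes hash the same."""
    return re.sub(r"\s*\(.*?\)$", "", message)

buckets = Counter(signature(m) for m in failures)
for sig, count in buckets.most_common():
    print(f"{count} failure(s): {sig}")
# 3 of the 4 failures collapse into a single #login-btn issue to fix first.
```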

[00:32:23] Joe Colantonio That is cool.

[00:32:24] Mudit Singh Yeah.

[00:32:24] Joe Colantonio And I think the last piece: a lot of times when I was working back in the day, I had to keep an Excel sheet, report on every run and why it failed, and then send a trend report to my management every month or two. If I'm an executive and I have 12 verticals all running tests, I just need to know, okay, how are we doing with our testing? Because you have all this data, does it give you almost like a 360 report they can look at at a high level: all right, here's where we are, and maybe if I need to dive down, high-level KPIs that let them know why things are failing? I don't know if that makes sense.

[00:32:59] Mudit Singh Yeah, that's in another part of the stack. We've been talking about the stack of the system: we have a test execution layer in place, on top of that we've added orchestration and auto-healing features, and then on top of that, because we have all this testing data, there's reporting. Creating reports and dashboards is, in fact, I'll say a little bit easier because we have all this data. But the thing that's really over the top is test intelligence. Observability is one piece, analytics is one piece: you should be able to see the health of your application. But the true AI part is finding insights from that test execution data. For example, giving you a direct insight that 90% of your tests are passing, but the 10% that are failing all run on Safari, so your execution is not working on Safari. Out of those hundreds of thousands of tests, you have written just 20 tests for Safari, or maybe repeated those 20 tests across 500 different configurations, and only the Safari ones are failing, for example. So it gives you those kinds of insights: it is breaking on Windows 11, it is breaking on Safari, it is breaking on this particular configuration. The point here is: we have hundreds of thousands of tests, and we're finding insights in there. We even do predictive analysis: these tests are failing, and if the trend continues, your next run will fail as well, because nothing has been done to improve it. That is pure test intelligence. That is also what we have built.
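
Here is a small sketch of the kind of slice-and-dice behind such an insight: aggregate results by configuration and surface the worst-performing one first. The sample data is invented for illustration.

```python
# Sketch: pass rate per browser, worst first.
from collections import defaultdict

runs = [
    {"browser": "Chrome", "passed": True},
    {"browser": "Safari", "passed": False},
    {"browser": "Chrome", "passed": True},
    {"browser": "Safari", "passed": False},
    {"browser": "Firefox", "passed": True},
]

totals = defaultdict(lambda: [0, 0])  # browser -> [passed, total]
for run in runs:
    totals[run["browser"]][0] += run["passed"]
    totals[run["browser"]][1] += 1

for browser, (passed, total) in sorted(totals.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{browser}: {passed}/{total} passing")
# Safari: 0/2 passing  <- the insight to surface first
```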

[00:34:32] Joe Colantonio Nice. Is there anything else about KaneAI that you think folks should know about, maybe a feature or a function that we didn't cover?

[00:34:38] Mudit Singh I think we have covered most of the things. There's the two-way editor integration, and there are other integrations we are building up. It's relevant for both web and mobile and covers the many browsers and devices available at LambdaTest. We are in early beta right now, which means a select few companies and select users are using the platform, and we are rolling out access to more users even as we speak. There are still feature sets we have to build out. But overall, I feel that we are headed in the right direction, specifically given the problem statements I got back from big enterprises over my last three months, and we're solving those challenges right now with a tool like KaneAI. I'm really, really excited about this new roadmap and the capabilities we are building, and I'm also looking forward to the new feature sets we can build on top of this based on customer feedback.

[00:35:35] Joe Colantonio Can anyone sign up for the private beta, or are there certain criteria they need to fall into?

[00:35:40] Mudit Singh No, right now we have opened it up to very diverse groups. You can go ahead and sign up, and we'll be happy to open up access for you. It's usually based upon the infrastructure we have available to execute those tests. As we scale that infrastructure up, and we are doing it progressively, we are opening up access to more and more people. We have already given access to more than a thousand users, who are using the platform right now, and we're scaling it up as we speak.

[00:36:07] Joe Colantonio Awesome. We'll have the link for that down below; definitely check it out. Okay, Mudit, before we go, is there one piece of actionable advice you can give someone to help them with their AI automation testing efforts? And what's the best way to find or contact you?

[00:36:20] Mudit Singh The one piece of advice, and in fact it's something we were discussing earlier as well, is that the world is changing a little bit. The way we write and run tests, in fact the way we do quality engineering, is also changing a little bit. It's great to read more about it and understand this technology. It's not a thing to panic about right now, of course, but it's always good to be a step ahead in the new technology game. Do a deep dive into these concepts; it will make you a better quality engineer and a better tester. That's my advice. And yeah, feel free to sign up for the beta platform; we'd be happy to have you on the platform.

[00:36:58] Thanks again for your automation awesomeness. Links to everything we covered in this episode can be found at testguild.com/a520. And if the show has helped you in any way, shape, or form, why not rate it and review it on iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:38:17] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.

[00:38:17] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Three people are in an online video chat. The top middle displays the "ZAPTALK" logo, while the conversation is labeled "#1." Amid discussions, innovation sparks curiosity as participants exchange ideas on integrating cutting-edge solutions into their projects.

From Manual Testing to AI-Powered Automation

Posted on 11/21/2024

Test automation has come a long way from the days of WinRunner and ...

John Radford TestGuild DevOps Toolchain

Scaling Smart: Growing Your Development Team with John Radford

Posted on 11/20/2024

About this DevOps Toolchain Episode: Welcome to another exciting episode of the DevOps ...

A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

Cross Platform Testing, Tester 4.0, Playwright Salesforce and More TGNS142

Posted on 11/18/2024

About This Episode: How do you Achieving seamless cross-platform testing What does it ...