AI-Powered Tools in Software Testing: A Deep-Dive using Katalon with Mush Honda

By Test Guild

About This Episode:

On this episode of the TestGuild Automation Podcast, host Joe Colantonio talks with Mush Honda, the Chief Quality Architect at Katalon, about AI-powered tools in software testing. They discuss Katalon, an AI-augmented test automation platform, and how AI can optimize and enhance the humanistic process of software delivery without replacing human testers. The episode covers Katalon Studio's features like Self-Healing, Test Failure Analysis, Autonomous Script Generation, and more, which use ML models to generate automation code based on typed scenarios, making test scripts more streamlined and readable. Mush also highlights the benefits of autonomous test generation, enabling developers to focus on new feature development while AI takes care of regression testing. The episode ends with a call to embrace change and use AI to streamline work, not replace testers.

For more info on Katalon, check out their free trial now: https://links.testguild.com/pnG5y

Exclusive Sponsor

Discover TestGuild – a vibrant community of over 34,000 of the world's most innovative and dedicated Automation testers. This dynamic collective is at the forefront of the industry, curating and sharing the most effective tools, cutting-edge software, profound knowledge, and unparalleled services specifically for test automation.

We believe in collaboration and value the power of collective knowledge. If you're as passionate about automation testing as we are and have a solution, tool, or service that can enhance the skills of our members or address a critical problem, we want to hear from you.

Take the first step towards transforming your and our community's future. Check out our done-for-you awareness and lead-generation packages, and let's explore the awesome possibilities together.

About Mush Honda


Mush is a seasoned senior engineering leader and AI and ML enthusiast with a proven track record of delivering exceptional results. With over 20 years of experience in Quality Engineering and Agile software delivery, Mush brings a wealth of knowledge to the table. Their expertise spans diverse industries including healthcare, voice analytics, mobile, and e-commerce. Furthermore, Mush holds an AWS certification in Machine Learning and is a certified cloud practitioner.

Connect with Mush Honda

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:25] Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today, we'll be talking with Mush Honda all about AI-powered tools in software testing. We're really going to take a deep dive into a lot of these different AI technologies and how they can help your automation testing efforts. Mush is a senior engineering leader who really leads the space in the AI and ML area; he's a big enthusiast, and he has a track record of delivering exceptional results. I believe he has over 20 years of experience in quality engineering and agile software delivery, and he also possesses a diverse background encompassing the healthcare, voice analytics, mobile, and e-commerce industries. He knows all the different areas. He holds an AWS certification in machine learning and he's a certified cloud practitioner, so he really knows his stuff. He is also the Chief Quality Architect at Katalon, which is an AI-augmented test automation platform that helps teams of any size deliver better software faster. Really excited about this discussion. You don't want to miss it. Listen up.

[00:01:25] This episode of the TestGuild Automation Podcast is sponsored by the Test Guild. Test Guild offers amazing partnership plans that cater to your brand awareness, lead generation, and thought leadership goals to get your products and services in front of your ideal target audience. Our satisfied clients rave about the results they've seen from partnering with us from boosted event attendance to impressive ROI. Visit our website and let's talk about how Test Guild could take your brand to the next level. Head on over to TestGuild.info and let's talk.

[00:02:00] Joe Colantonio Hey, Mush. Welcome to the Guild.

[00:02:03] Mush Honda Hey, Joe. Thank you for having me.

[00:02:05] Joe Colantonio Great to have you. It's been a while. I think the last time we spoke was, like, 2019. I guess before we get into it, I think I botched your bio. Is there anything in your bio that I missed that you want the Guild to know more about?

[00:02:16] Mush Honda I think the most important thing that you missed completely is that I'm a self-proclaimed foodie. I love to try different cuisines from all over the world. So that's my go-to these days.

[00:02:26] Joe Colantonio Awesome. Do you have a favorite?

[00:02:28] Mush Honda Yes. I actually tried Indonesian last night, in fact, and that was something that was amazing. All credit to my friend here in Atlanta for making me more aware of what all we have in Atlanta for that.

[00:02:42] Joe Colantonio Nice. My wife and I are foodies as well. We travel to locations just to eat. So I'll definitely have to drive to Atlanta and check that out for sure.

[00:02:49] Mush Honda Absolutely. You should.

[00:02:52] Joe Colantonio Awesome. I thought we'd dive into it. I know it's a big topic. Everyone's talking about AI and machine learning, but I think you have a unique perspective in that you're an ML/AI expert and you also happen to work for a vendor that has a lot of customers all over the world. You can actually see how it's implemented, how users are using it, and how they're benefiting from it. So I just want to get your perspective, your thoughts on AI and machine learning in software testing. Is it really a thing? Is it hype? Where can it help folks, things like that?

[00:03:20] Mush Honda I'll be the first one to say it actually sounds like more hype, quite honestly, just because it's such an interesting buzzword that everybody wants to talk about. I think there's hype that precedes everything that comes out with AI. And I honestly believe, one of the biggest things I've observed is that a lot of my fellow practitioners have the same mindset, I think, as we did maybe 15 years ago when test automation was coming onto the scene, where everybody was thinking, this is going to now replace us as people. My first reaction to that is: absolutely not. I look at anything to do with AI and ML more as an accelerator. The way I would say for people to think about AI is almost in the sense of a tool that will help us get more efficient and really help us not have to worry about the mundane stuff. If I were to summarize what I think of AI, I use the analogy of Iron Man and Jarvis. It's like, hey, I am in control, I am Iron Man, and Jarvis is that silent partner of mine who is doing all of the heavy lifting for me, making me more aware of what's going on in areas I may not be focusing on, but I am still in control. I am in the driver's seat.

[00:04:45] Joe Colantonio Love it. No, I love that analogy. Another one I really like is when someone says it's like a friend who gives you suggestions. You may take them, you may not. And sometimes it's a bad friend, but sometimes it's a good friend. It's up to you to take that advice. What you do with it, I guess, is how it works.

[00:05:01] Mush Honda Absolutely. Absolutely. I think for me, it's always been about acceleration. And, having been a lazy developer all throughout my career, I think that's where this comes in very handy: to be able to say, hey, I have the underlying domain expertise. Let me now show you and teach you how to get things done my way, and use AI and ML in that context to be able to say, here's the only context I want you to think about, and then go do these types of activities for me using that context. And speaking of context, that's where all of the LLMs come into play. And that's sort of the new way of looking at things when it comes to testing.

[00:05:43] Joe Colantonio Love that. Speaking of context, do you think it depends? When I go on LinkedIn, I see different trends going on in different conversations. Some people are like, oh, AI and machine learning should be outlawed, it's not really helping testers, it's not really a thing. And then there are other people who are full-on, like, it's going to replace us. What's the correct way to approach this? I know you said treat it like a sidekick, almost like a friend. How can it make us more efficient? And do you think it's almost a detriment if someone totally ignores the technology? Maybe not drinking the Kool-Aid, but ignoring it as well, I think that's probably a danger there too. What are your thoughts?

[00:06:17] Mush Honda Yeah. I am sort of in between on that broad spectrum, to be honest. I, like any other engineer, am always skeptical as my first go-to. I would say approach it in that same mindset. Right? Like, while I've mentioned AI being an accelerator, it can do a whole lot of heavy lifting for you, in the sense that the processing power it contains may be what you should think of leveraging as a benefit. Case in point, I use an example that I've seen come up a lot, right? We've heard different people come and talk about, hey, how do I use ChatGPT, as an example. Now OpenAI has an API version, which is great. From a tester's perspective, I think of it as an awesome idea sounding board. Let me use something like that with the proper context, and let me start thinking about how I should apply different strategies, different validation points, and different criteria in the context of the software that I am responsible for testing or helping to test, and essentially its confidence in its delivery, in its software lifecycle. I think of it more as that optimizer and sounding board, so to speak, giving it context around what we essentially want. One of the things that recently came into play for me personally was, just like automation, I think manual testing is going to be and continues to be a critical component of the testing activity. I've had discussions in the past, quite honestly, where people are surprised when I bring the mindset that I believe software testing, and software development in general, is a humanistic process. It is not something that is robotic. We use tools to enhance that humanistic process of software delivery. And I think of machine learning as that component that will help us optimize what we do. Essentially, help me get better with how accurately stories are provided to us or how accurately reports are created, how we can get value out of all of these activities that we combine and do with engineering in general, and how we make the process better. That's where I think of tooling, and I assign the tooling label to AI, not a human replacement or a human bot, so to speak.

[00:08:44] Joe Colantonio Great. So what are some contributions that AI can make for testers? A few key areas, you think? Like you said, it acts like a guide. What are those areas where it could help a tester?

[00:08:55] Mush Honda Yeah. So one of the key things, I think, from Katalon's perspective that I can bring in, right, is that we've actually launched a Jira plugin that is free to use. What it essentially does is, when you put your user stories in Jira, it can connect to and use OpenAI to create a sense of what different manual test cases, scenarios, and steps should be associated with validating that user story. And what we have done behind the scenes is sort of the up-front engineering, so to speak. Because of our experience with testing and seeing how different testers actually want to validate certain user stories, we've built that information into our plugin, which allows testers to quickly generate all of that. And what that does is act as an initial baseline. I would not ever expect it to be 100% accurate or effective. But it becomes a starting point for me as a tester, or as a test lead, or even as a product owner, to be able to say, am I putting the right level of information in my stories to prevent unnecessary overhead for the testers, where they would have to keep going back and forth clarifying information? If I can get a general idea of, hey, am I giving enough information in my user story so that scenarios can be easily documented? A plugin can do that in a matter of seconds, versus a couple of hours, for example, when somebody from the team actually looks at it, processes that information, and then comes back with additional questions. It just helps accelerate the whole delivery process. That's one component, in the context of acceleration, as an example. The other side of this, obviously, is to say, hey, now that I see a base example of what scenarios are being driven by this user story, what else is missing? Me as a tester, with the context of the domain of the system that I am testing, that enables me to say, well, yeah, this seems to be on the right track, or, no, not really, which means there's potentially a gap in the user story. Let me clarify the user story a bit more so I can generate more accurate scenarios. Or it becomes a baseline to say, well, out of the 50 that I need to do for this story, the top 30 seem to be in line with what I need to do, or maybe I need to combine these. It acts as a great sounding board and gives you an initial set of go-to scenarios that you could quickly extend, or say, well, this meets my risk assessment or risk evaluation criteria. That's an example of where I would use that.
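
To make that concrete, here's a purely illustrative sketch of the kind of baseline such a plugin might produce from a user story. The story and the generated scenarios below are hypothetical examples, not actual plugin output:

```
User story (input, in Jira):
  As a returning customer, I want to reset my password
  so that I can regain access to my account.

Generated test scenarios (illustrative baseline):
  1. Verify a reset link is emailed when a registered address is submitted.
  2. Verify an unregistered address gets a generic confirmation message.
  3. Verify the reset link expires after its validity window.
  4. Verify the new password must meet the complexity policy.
  5. Verify the old password no longer works after a successful reset.
```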

[00:11:47] Joe Colantonio Nice. A lot of testers are big on mind mapping. This almost sounds like a way of brainstorming, where not every idea is great, but you jot them down, you see where they connect, and it's almost like a guide to get you thinking and going, oh wait, maybe there's some risk here I didn't even think about that it brought up. Well, maybe there isn't, but at least it helps prompt you to start thinking. I think it's a great partnership. Almost like you said, it's a guide, and it can help you kick off some ideas, for sure.

[00:12:10] Mush Honda Yeah. No, absolutely. And I think, just continuing on with that mindset in terms of partnering, another item I keep thinking about is that I also see these AI models, or machine learning models, really acting as my pair programming partner. What I feel is fascinating is, along the same lines, we've actually also developed another feature, out in beta, for our product Katalon Studio. It's something that we are calling StudioAssist. What that essentially does is leverage LLM models. Again, you can put in your key for your OpenAI connection, but then what it lets you do is, within your IDE, you can essentially type out the scenario that you want to test, and it generates the code for your test automation script right there, immediately, for you. And what I've been doing, quite honestly, is using that almost as my pair programming partner, to be able to say, hey, I know these scenarios. Let me create and label this in a certain way so that all of my existing keywords can be leveraged. And what I found is it's almost a step up from the first leap most testers make when they first get exposed to test automation, which is record and playback. As I'm sure you've seen, a lot of the low-code/no-code tools that are out there for us practitioners start off by saying, oh yeah, it's so easy to use, but when you actually dig into it, it's nothing more than record and playback. And what I'm seeing is, while the Studio platform enables you to perform record and playback, if that's your thing, from my perspective, leveraging StudioAssist using ChatGPT or OpenAI really enables you to step away from that traditional record and playback, because its one biggest disadvantage is that everything remains static. Unless you go in and say, oh, now I want to use variables, I want to use data to make my scripts more reusable, etc., etc., you're stuck. This essentially lets you walk away from having to be tied to just record and playback. And what I've also been able to do with it is create scripts very, very easily, even without having the system under test up and deployed in an environment. It's another way of shifting left, so to speak, where I can create my automation scripts preemptively, just like I would with my manual test cases, for example, and not have to worry about waiting for the system to be implemented before I can do a record and playback. It's helping even the non-technical testers, in my opinion, to leverage and get the benefit of, again, tools that accelerate their lifecycle. As a practitioner, I've always seen, and I'm sure you have too, that while we talk about quality being a team sport, when it comes down to roles, it's typically people looking to us to say, hey, what can you do to make it more streamlined and give us confidence in the quality? I think these are tools that are very, very practical for us as practitioners to apply in our day-to-day workflows.

[00:15:42] Joe Colantonio I love this. When I was a performance tester, I used to use LoadRunner to do record and playback of me interacting with an application, and then I had to point out the dynamic values myself. But it seems like this would not only do that automatically, it also says, here are the dynamic values, let's parameterize them for you automatically. It really would level that up and, I think, make it more consistent so you don't miss things as well, it sounds like.
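
For listeners who want to picture what that parameterization looks like in practice, here's a minimal data-driven sketch in Katalon Studio's Groovy script mode. It uses Katalon's standard WebUI keywords and test data API, but the 'Data Files/LoginData' data file, the object repository entries, and the URL are hypothetical names for illustration:

```groovy
import static com.kms.katalon.core.testdata.TestDataFactory.findTestData
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// External data table of credentials, instead of values hard-coded by record/playback
def data = findTestData('Data Files/LoginData')

WebUI.openBrowser('')
for (int row = 1; row <= data.getRowNumbers(); row++) {
    WebUI.navigateToUrl('https://example.com/login')   // placeholder URL
    // Each iteration pulls its values from the data file, so one script covers many cases
    WebUI.setText(findTestObject('Page_Login/input_Username'), data.getValue('username', row))
    WebUI.setText(findTestObject('Page_Login/input_Password'), data.getValue('password', row))
    WebUI.click(findTestObject('Page_Login/btn_SignIn'))
    WebUI.verifyElementPresent(findTestObject('Page_Home/lbl_Welcome'), 10)
}
WebUI.closeBrowser()
```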

[00:16:02] Mush Honda Yeah, absolutely. And while the product is in beta right now, it's available for people to download. But what we want to make sure of is to keep evolving it. We're always coming at it from a pure practitioner's perspective. Yes, all the bells and whistles and all the right keywords should exist in all of our products, for sure. But I think what sets us apart is that we are purely coming at it from a practitioner's perspective. It's not just about using the latest buzzwords, but practically applying what we can use in today's teams, especially with the pressure of time to market and higher confidence in quality. What are the practical things, and what are the practical components and features, that should be put into a solution that helps me as a tester be more effective and feel less pressure, for lack of a better word?

[00:16:54] Joe Colantonio Absolutely. So you mentioned keywords a few times. For folks who aren't familiar with Katalon, can you give us a visual? Walk us through what it's like to create a test, and maybe how StudioAssist helps us there?

[00:17:05] Mush Honda The best way I would describe it is as a commenting process when you have your script open. In Studio, we have two modes. One is more of a non-technical perspective; it looks more like a traditional manual test case, where you have the ability to have multiple test steps listed out in simple English. And we have what's called script mode, which is basically a more traditional test automation engineer's perspective of an IDE, where you have your XPaths, so to speak, where you can put in your code. The way we've implemented StudioAssist is, you need to be in the script view, and in there, just start commenting at the step level: what are the different things that you want to apply when you're creating a workflow? For example, if you are on an e-commerce site and you're looking to add something to your cart, you could put in those steps in very simple terms. I typically use the Jira plugin that I was talking about a few minutes earlier and plug that back into the IDE. And then, as part of the comments, I can highlight that and say, hey, generate code for this. What it actually does, using the context of Katalon Studio and the built-in keywords, is use OpenAI to create those steps as coded steps, written in Groovy, that can be directly plugged in. And you can start using that as part of your test. As I was mentioning earlier, this is sort of an alternative to record and playback, a more mature version of it. What you can do in this case is say, well, I am looking to test a system that may not be developed yet, or a feature that's not in QA yet. Let me look at the wireframes. Let me look at the flows. Let me look at what has been defined by the requirements. I can put that flow in simple English and get it preemptively converted to an automation test script. And then I can use my prior knowledge, or what's already been defined by the developers, to say, what are my objects that align with each of those instructions? In Selenium speak, if you were to interact with a particular object or dropdown, the actual keyword gets created for you in the script, and then, if you have an object repository that you've already saved, you can simply map that to say, hey, use this named object. It may not be as clear as me just speaking to it, but on our website we've got examples and very simple setup instructions that people can use to really do it. What I've realized, just from a tester's perspective, is that things that may have taken me 30 or 40 minutes to create after an automation framework is set up are done in 5 minutes or less. Just being able to do a lot more within the constraint of time that we as testers face already, that helps as a tremendous accelerator. I've used it in that example.
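
As a rough illustration of the comment-to-code flow Mush describes, here's what a StudioAssist-style result might look like in script mode. The generated steps are a sketch only; the actual output depends on the model and your project's object repository, and the site and object names below are hypothetical, though the WebUI keywords are Katalon's standard ones:

```groovy
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// The tester writes the scenario as a comment, highlights it, and asks
// StudioAssist to generate code for it:
// "Search for a product, add the first result to the cart,
//  and verify the cart badge shows 1"

// Hypothetical generated steps, mapped to saved object repository entries:
WebUI.navigateToUrl('https://shop.example.com')
WebUI.setText(findTestObject('Page_Shop/input_Search'), 'coffee mug')
WebUI.click(findTestObject('Page_Shop/btn_Search'))
WebUI.click(findTestObject('Page_Results/btn_AddToCart_First'))
WebUI.verifyElementText(findTestObject('Page_Shop/lbl_CartBadge'), '1')
```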

[00:20:15] Joe Colantonio Nice. So it sounds like these keywords can really help increase the understanding of test scripts, which a lot of people kind of ignore. If your tests aren't readable, they're hard to maintain as well. It also sounds like, once it generates, say, a script for you that maybe does a log-in or some big flow, you can automatically have it encapsulated for you as a keyword, like Enter Patient, and then you can add it as part of your test suite, I guess, to make it more repeatable as reusable keywords.

[00:20:46] Mush Honda It actually does a couple of things. It does the keyword aspect, but it can also do your steps and flows independently, if that's what you want. It all comes down to how you actually put the comments in there, and we've got examples of how best to do that. But it lets you do both. In addition to that, something else you mentioned, which we've applied as part of this code conversion: documentation and commenting are an important part as well. So it actually does that for you as part of the code conversion process. In addition, one of the other challenges we face, and I'm sure all of us face, is when new members join the team who aren't familiar with the framework, or aren't familiar with the context and domain of the system under test. For whatever reason, we may not be as descriptive in commenting the existing code. So we've used StudioAssist to add another feature as well: you can highlight any piece of code within Studio and do the exact same thing, where you say, hey, explain this code to me. What it actually does is reverse the process, where it looks it up and explains to you, again with commenting, what that block of code is doing for you. It's almost reverse engineering, to say, well, I'm trying to learn what is happening here. What are the tests that existed before I joined the team doing? I can now highlight that, I can see the commenting, and then add to the good practice of commenting code.
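
To picture the "Enter Patient" idea from the exchange above, here's a minimal sketch of a Katalon custom keyword. The @Keyword annotation and the CustomKeywords invocation are Katalon's standard custom-keyword mechanism; the package, class, and object names are hypothetical:

```groovy
package clinic.keywords

import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.annotation.Keyword
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

class PatientKeywords {

    // Wraps a multi-step flow behind one readable, reusable step
    @Keyword
    def enterPatient(String firstName, String lastName) {
        WebUI.click(findTestObject('Page_Patients/btn_NewPatient'))
        WebUI.setText(findTestObject('Page_Patients/input_FirstName'), firstName)
        WebUI.setText(findTestObject('Page_Patients/input_LastName'), lastName)
        WebUI.click(findTestObject('Page_Patients/btn_Save'))
    }
}
```

A test case can then call the whole flow with one readable line, for example: `CustomKeywords.'clinic.keywords.PatientKeywords.enterPatient'('Ada', 'Lovelace')`.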

[00:22:19] Joe Colantonio Oh, you could almost use it for onboarding new testers. Hey, go through this, use this tool, and have it explain the code so you understand the code base before you dive in. That's awesome.

[00:22:28] Mush Honda Exactly. And in the process, make your code better as you're working in it.

[00:22:33] Joe Colantonio Right. Love it. So you said this is in beta right now, correct?

[00:22:37] Mush Honda Yes.

[00:22:37] Joe Colantonio Have you gotten any feedback? Have you actually seen it in action, where it's like, oh, not only did we develop it with certain expectations, but now it's in the wild and it's actually fulfilling those expectations?

[00:22:48] Mush Honda We just launched this yesterday, I believe. It's in beta as of yesterday. One of the things that we're excited about is that in my 20-year journey, quite honestly, I have not been a part of a team where we've built a product for testers. Katalon is that one place, and it's awesome. All of our team members are subject matter experts on day one. What we've been doing, quite honestly, is testing and giving user feedback as part of our delivery as well. We have an initiative internally where we as testers say why a feature should be developed and what kind of feature we should develop, taking the feedback from our customers and saying, hey, this makes sense. Or, as I'm going through my releases or sprints, I'm giving real-time feedback to our product owners to say, guys, this is not making sense, let's tweak it a certain way. And honestly, for StudioAssist, the commenting piece was developed similarly. It was the next stage, where we said, hey, it's awesome that it can comment. Can we also make sure that we can reverse it, to say, well, what happens when coding is done but there's no commenting on it? Can we work backward from that?

[00:24:07] Joe Colantonio I always get asked this question from people too: does the vendor use their own tool to test their own code? But it sounds like you use Katalon to test Katalon, which is perfect.

[00:24:16] Mush Honda Absolutely. Katalon and then some. We use Katalon and even our partner integration tools. We are making sure we're using all of those as well, and testing across all of the capabilities. So API, mobile, etc., etc., doing all of that for sure.

[00:24:32] Joe Colantonio Nice. We spoke about generative AI, copilot-type functionality. These are great new features. Another term I've been hearing a lot about is autonomous testing. Is autonomous testing different? How do you define autonomous testing, and does Katalon have something to help with this area as well?

[00:24:48] Mush Honda Right. So in the general sense of the term, for me, autonomous means: hey, go ahead and create stuff without my input. I think in general, in our industry, that term is overinflated, to be honest. What it means for us at Katalon, right, is, again, the angle of an accelerator. We do have autonomous test generation capabilities. We are actively working on it. Again, that is in beta right now as well; it's a closed beta. We've got a smaller group of our clients that we are partnering with and working through it with them. But in essence, it's a capability where we are thinking, again from a practitioner's perspective, about what would let us focus on the core value that we as human testers provide in the software development lifecycle. In that context, what we think about as autonomous test generation is the ability to look at real data as your system is being used in production or in an environment. Typically, I would say production is our go-to. But looking at how your system is being used in production, and then automatically creating automation scripts that reflect that usage. And that process exposes two things. It improves and makes your regression tests more realistic, reflecting the areas that are being more frequently used, and it's dynamic enough that every time the usage changes, your regression suite automatically gets updated with the new flows that are being used. But it also keeps you, as the tester and the test team, in charge. While it has the capability of creating all of that automatically, it will not do it without your input, your consent, or your criticism. We've got the ability to say, hey, let's visualize it, and I know you talked about mind mapping previously as well, I think that's the go-to for most of us testers. We've got something that we just released yesterday in our latest beta release: user journey maps, which are visual diagrams saying, here's how your traffic is flowing in the system under test, and we've identified these different workflows that should be automated. Do you want us to go ahead and automate those? And you could look at that and say, yes, this makes sense. Or you could say, no, this is not useful, go ahead and regenerate. It goes through the process quickly, over a period that ranges from a few hours to a few days, growing as more data becomes available. And then it enables us to have these test cases automatically created for our consumption. We could then say, hey, I want to now use these in other environments that we are testing in. It also acts as a partner, really, to be able to say, well, now that regression testing reflects how the systems are being used, me as a tester can help my team focus on new feature development and new updates that are coming into the system, and not have to do the mundane task of saying, okay, now that we're ready for a release, do I also have to go create, manage, and upkeep my regression tests? This is sort of like your coordinator, your fellow tester, a tool in your testing process, to be able to say, I already have a live upkeep of all of my regression that reflects how my system is being used. You've got that going as well.
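
Katalon hasn't published the internals of this feature, so the sketch below is purely conceptual: a small, runnable Groovy illustration of the approve-then-generate loop Mush describes (mine journeys from production usage, rank by frequency, generate tests only after a tester approves). Every name and the stub data in it are hypothetical:

```groovy
// Purely conceptual sketch -- not Katalon's actual implementation or API --
// of the loop: observed production journeys become candidate regression
// tests only after a tester approves them.

class Journey {
    String name
    int frequency        // how often real users performed this flow
    List<String> steps   // plain-English steps mined from traffic
}

// Stand-in for the analytics that mine journeys from production traffic
List<Journey> mineJourneys() {
    [new Journey(name: 'Checkout with saved card', frequency: 9200,
                 steps: ['Open cart', 'Select saved card', 'Confirm order']),
     new Journey(name: 'Update shipping address', frequency: 410,
                 steps: ['Open profile', 'Edit address', 'Save'])]
}

// Stand-in for the visual review step where the tester accepts or rejects a flow
boolean testerApproves(Journey j) { j.frequency > 1000 }

// Rank by real-world usage so the suite reflects what users actually do,
// then emit scripts only for approved journeys
mineJourneys().sort { -it.frequency }.each { j ->
    if (testerApproves(j)) {
        println "Generating regression script for: ${j.name}"
        j.steps.each { step -> println "  step: ${step}" }
    }
}
```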

[00:28:39] Joe Colantonio Wow, I really like this, because it almost sounds like a very tester thing: risk. We always say to focus on the risk. This can help identify risk, because once you're in production, regardless of what your assumptions were or how you developed it, your customers may use it differently, and you may not even know. Like, we have 100,000 tests, but oh boy, 90% of them are doing one thing, and they're missing the other 10%.

[00:29:02] Mush Honda Exactly.

[00:29:05] Joe Colantonio Nice. So you talked about shift left and how StudioAssist can help you with that. Something like this can help you with shift right. Is this feature called Autonomous Testing? Is it called something like ATG, or what is this feature called in Katalon?

[00:29:17] Mush Honda Yes, we're calling it Autonomous Test Generation right now. I am not sure about the marketing aspect of it. Every time I hear the terminology, my first reaction is: tell me what autonomous really means. As I was alluding to earlier, autonomous in the broader sense implies a little magic, and I know it's not magic, it's just engineering. At least for now, we're calling it autonomous. I think we're going to stick with that for now.

[00:29:45] Joe Colantonio Great. Is there anything that it can't help with? Are there any limitations that you see with it? You said it's not magic, and it may sound like magic so far, but is there anything where you'd be upfront: no, it's not going to help you with these types of areas, if that's what you're thinking?

[00:29:58] Mush Honda Yes. So, just like everything we test, I would say it's not the magic bullet or solution that would say, hey, now we will guarantee that there are no issues. No, absolutely not. That's the first expectation that I think really needs to go away. The second one, I think, is that because it's in beta, quite honestly, we're still learning what the different workflows are, and depending on the different complexities of the different systems being built, we are still learning a lot of things. What other ways are there that we could streamline this recording and creation process further? Right now, what we have is based on limited exposure to domains. We are looking at, maybe, an e-commerce site versus a banking solution, for example. So we don't have insights yet into what the different types of domains are that this may be the most beneficial for. We're still learning and exploring that. There's a limitation where, as more people sign up for it, it may not truly apply in their context or the complexity of their system. And then, of course, we are also still looking at what types of systems it can support. Is this tied to just browser-based apps, or APIs, mobile, and such? Right now our scope is very limited. It's going to grow over time, for sure. But right now we're focused only on the web UI aspect. It's focusing on functional UI tests and workflows for testers. That's at least the scope that's in play right now. And of course, we are building and trying to expand that out, with APIs and things coming up next.

[00:31:42] Joe Colantonio Nice. I know we've focused a lot on testers, because this podcast is focused on testers. However, I can see this almost helping the whole team. This is a big thing when I talk to other folks and experts: the need to get the whole team involved. But sometimes it's overhead. Like, I don't understand Selenium, even as a developer, or I don't know what I'm looking at as a business user. It seems like this could almost unify the team as well and help get everyone involved. Is that how you see it being used?

[00:32:07] Mush Honda Absolutely. Absolutely. And I think the one part to talk about, right, is that all of the features we talked about today focus on the core activity of test creation and execution. But if you look at Katalon as a platform, we also have a lot of other components, including the concept of testing operations. We have a section called TestOps, which essentially lets you consolidate all of your reports, whether they're coming from TestNG or any developer-centric reports, into a central location where, as a team, you can look at overall release readiness, as an example: what are the underlying issues that are still preventing you, giving you that gauge of saying, yes, we are ready, as we inch towards our release deadline? It's definitely built as a platform that enables the entire team to get together collectively and help build that confidence in quality, to really continue to make quality a team sport rather than the responsibility of a select few. In the same way, apart from being able to schedule tests and things of that nature, you have integrations into the entire lifecycle. You can look at requirements, you can look at defects, and you can see all of that in a holistic dashboard, so to speak, of where we are with our release. We've got integrations into operational environments; BrowserStack, LambdaTest, and things of that nature are already seamlessly integrated, and we have our own offering called TestCloud as well. It's not just focused on test creation; it gives you the overall ability to take testing as a holistic platform and absorb it into your overall development ecosystem. It integrates with the tools that, quite honestly, teams of today are using, and it spans all the way from CI/CD tools to cloud execution. We are a platform that easily comes in and works cohesively in that ecosystem.
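
As an example of the CI/CD end of that ecosystem, a console-mode run with the Katalon Runtime Engine typically looks something like the command below. The project path, suite name, and API key are placeholders, and flag details may vary by version, so treat this as a sketch and check the current docs:

```bash
katalonc -noSplash -runMode=console \
  -projectPath="/path/to/YourProject.prj" \
  -testSuitePath="Test Suites/RegressionSuite" \
  -browserType="Chrome" \
  -executionProfile="default" \
  -retry=0 \
  -apiKey="<your-katalon-api-key>"
```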

[00:34:16] Joe Colantonio Love it. And once again, I always do trends that I see for the upcoming year, and for this year I said more and more tools are becoming platforms. And this sounds like that's the direction of what's going on as well.

[00:34:27] Mush Honda Yeah.

[00:34:27] Joe Colantonio I know we talked about a lot of things here, and sometimes people need to see it to believe it. So you did mention a free trial, I believe. Also, is there an option for a demo? And can you talk a little bit more about the free trial and how that works?

[00:34:38] Mush Honda Sure. Absolutely. For people who don't use it yet, we've got a 30-day all-inclusive trial available. As people download it, it starts with Studio, but then you get access to the cloud infrastructure with TestOps and everything being in the cloud. So a 30-day window is what we give for people to try out all of the capabilities. And then, of course, if they want some demos and such, we've got a team; through our website, people can reach out to us and we can schedule something. There is a lot of activity in the community as well. We've got a whole forum section where active users are constantly giving feedback, sharing information about how they are currently using the system, and discussing ideas and alternatives. It's a very active community, and quite honestly, the philosophy we built Katalon with was: for the testers, for the teams. It's exciting to see that level of engagement and activity continue. As people want to get more familiar with it and learn about it, I would say definitely head over to our website; there are clear links, and you can try it out.

[00:35:43] Joe Colantonio Absolutely. I also have links to the free trial, the demo, and a bunch of other information on Katalon in the show notes, so you definitely want to check that out. Okay, Mush, before we go, is there one piece of actionable advice you can give to someone to help them with their AI automation testing efforts? And what's the best way to find or contact you?

[00:35:59] Mush Honda Sure. The one piece of actionable advice I always talk about is: embrace change, don't be afraid of it. View all changes as an opportunity to learn. I think having a mindset of continuous learning goes a long way. The other advice I would give is: definitely look at AI like a pro, be skeptical, but definitely use it to streamline and make your work more efficient, because, as everybody knows, testers are definitely put under a lot of pressure all the time. Anything that we can do to offset that, definitely embrace it and try it out. The best way to reach me is on LinkedIn; I think that would be my go-to recommendation for people to reach out. And yeah, if anyone wants to talk about any of the items or capabilities we've discussed today, I'm more than happy to jump on a call and talk to people.

[00:36:52] Thanks again for your automation awesomeness. Links to everything we covered in this episode are at testguild.com/a453. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:37:27] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to TestGuild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
A podcast banner featuring a host for the "testguild devops news show" discussing weekly topics on devops, automation, performance, security, and testing.

Playwright Vs JMeter, 3 Automation Questions You need To Know and More TGNS119

Posted on 05/07/2024

About This Episode: Do you know the three pivotal questions to ask when ...

Tobias Müller TestGuild Automation Feature

Debunk Autonomous Software Testing Myths with Tobias Müller

Posted on 05/05/2024

About This Episode: In today's episode, we're thrilled to have Tobias Müller, a ...

A podcast banner featuring a host for the "testguild devops news show" discussing weekly topics on devops, automation, performance, security, and testing.

AI Joe Bot, AI in Mobile, Testing AI and More TGNS118

Posted on 04/29/2024

About This Episode: AI Joe Bot, AI in Mobile, Testing AI and More ...