Combining AI and Playwright using Autify Genesis with Ryo Chikazawa

By Test Guild

About This Episode:

Want to know more about the power of AI to enhance Playwright scripts?

Today, our expert guest is Ryo Chikazawa, the visionary behind the AI automation testing solution Autify. In this episode, we'll dive deep into the innovative AI-driven solutions that Autify is bringing to the software testing world. We'll explore how their integration with Playwright makes it uber-flexible for both technical and non-technical users.

Ryo will enlighten us on their journey from a no-code solution to their latest product, Autify Genesis, which harnesses Gen AI to fill the gaps in the testing lifecycle. We'll delve into the robust features of their AI agent, which can autonomously generate Playwright scripts, interact with applications, and adapt to changing requirements, all while ensuring users steer clear of vendor lock-in.

Join us as we uncover how Autify is making significant progress with their local and cloud-based solutions, the pivotal role of AI in transforming software development, and the distinctive features that set their offerings apart in this rapidly evolving space. This episode is a must-listen, packed with exciting revelations and practical insights for anyone involved in QA and software testing.

To see this in action, check out the webinar replay of Ryo's session with the Guild to see how it all works. You can find that link down below.

Webinar replay: Autify Genesis, Partner with GenAI to Build Test Cases and Test Scripts

About Ryo Chikazawa


Ryo Chikazawa is the Co-Founder and CEO of Autify, an AI platform for software quality engineering. Autify graduated from Alchemist Accelerator, the top B2B startup accelerator in the US. Prior to Autify, he had more than 10 years of software engineering experience in Japan, Singapore, and the US (San Francisco). During that time, he realized that software testing is a huge common problem across the globe, which led to the inception of Autify.

Connect with Ryo Chikazawa

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

tgaRyoCombiningAIandPlaywrightusingAutifyGenesis517.mp3

[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:00:35] Joe Colantonio Want to know more about the power of AI-enhanced Playwright scripts? Well, you're in for a treat, because today our expert guest is Ryo, the visionary behind the AI automation testing solution Autify Genesis. In this episode, we'll dive deep into the innovative AI-driven solutions that Autify is bringing to the world of software testing. We explore how their integration with Playwright makes it uber-flexible for both technical and non-technical users. Ryo shares insights on his journey from a no-code solution to their latest product, which leverages Gen AI to bridge the gaps in the testing lifecycle. We'll also discuss the powerful features of their AI agent, which can generate Playwright scripts, interact with applications autonomously, and adapt to changing requirements with ease, all while ensuring users avoid vendor lock-in. Tune in as we explore how Autify is making strides with local and cloud-based solutions, the importance of AI in helping with software development and testing, and what sets their offering apart in this ever-growing space. Don't miss this episode, packed with awesome revelations and practical insights for anyone involved in QA and software testing. And to see this all in action, be sure to check out the webinar replay of the session Ryo did with us a few weeks ago on our Test Guild webinar series, where you can see for yourself how it all works. You can find that link down below.

[00:02:20] Joe Colantonio Hey Ryo, welcome to the Guild.

[00:02:25] Ryo Chikazawa Hi, Joe.

[00:02:26] Joe Colantonio Great to have you. Really excited by this. I'm always excited when I learn about new solutions, especially around AI. So the first question I always ask founders, probably the first thing, is: how did you get into AI? First off, and then we'll dive into what the solution is.

[00:02:39] Ryo Chikazawa Yeah, sure. Autify actually started in 2019, so we've been investing in AI since the beginning of the company. Our original flagship product is Autify NoCode, which is a record-and-playback, no-code/low-code solution, so anyone can automate, and then AI maintains the scenarios you created through record and playback as the UI changes. That's how we got into AI in the first place. In the original product we employed classical machine learning techniques to identify the same element on a page even after the UI had changed. The way we did that was by collecting the so-called features of an element, like IDs, class names, positions, colors, text, parent elements, and siblings, then calculating a similarity score between the elements on the two different screens and picking the highest-scoring element from there. That's how we started in terms of AI. From there we gradually evolved our AI technologies. We implemented image recognition to identify elements, which is applied in our mobile solution. In mobile, one of the biggest challenges is finding the right element, and sometimes it's difficult to access the source code, which is why we use image recognition techniques to identify the element on the mobile screen; we have a visual recognition machine learning engine there. We've invested in a lot of areas of AI, and now that Gen AI has arrived we decided to launch a new product, Autify Genesis. Autify Genesis takes on the earlier part of the software testing journey: test case creation as well as test script generation by understanding the requirement. The requirement can be anything, a PDF, Word document, spreadsheet, Figma file, or Jira ticket. Whatever you write as a requirement can be taken in by the Genesis Gen AI engine, which creates test cases with a wide range of coverage. We also built an AI agent that actually accesses your application, interacts with it, and generates the Playwright script in real time. One of the biggest challenges of generating a test case or a Playwright script just by using ChatGPT or another LLM is that it can create a very nice-looking Playwright script; it looks like it works, but it doesn't, because the model doesn't know the structure of your application. So what we did is implement an AI agent that actually interacts with your site, analyzes the structure of the pages, and finds the accurate element locators and so on. With that, we can now create a very, very accurate Playwright script, because the agent has actually interacted with the application, and we run the generated Playwright script at the same time, so the outcome is very accurate; it runs. Anyway, long story short, that's our AI journey since the beginning.
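
To make the element-matching idea concrete, here is a minimal TypeScript sketch of the kind of feature-similarity scoring Ryo describes (comparing IDs, class names, text, and position, then picking the best candidate). The feature set, weights, and thresholds are assumptions for illustration only, not Autify's actual algorithm.

```typescript
// Minimal sketch: score candidate elements on a changed page against the
// element originally recorded, using weighted feature similarity.
// Features, weights, and the 50px position threshold are illustrative guesses.
interface ElementFeatures {
  id: string;
  className: string;
  text: string;
  x: number;
  y: number;
}

const WEIGHTS = { id: 0.4, className: 0.2, text: 0.25, position: 0.15 };

function similarity(a: ElementFeatures, b: ElementFeatures): number {
  let score = 0;
  if (a.id && a.id === b.id) score += WEIGHTS.id;
  if (a.className === b.className) score += WEIGHTS.className;
  if (a.text.trim() === b.text.trim()) score += WEIGHTS.text;
  // Treat candidates within 50px of the original position as a positional match.
  if (Math.hypot(a.x - b.x, a.y - b.y) < 50) score += WEIGHTS.position;
  return score; // 0 = nothing matched, 1 = every weighted feature matched
}

// Pick the candidate on the new page that best matches the recorded element.
function findBestMatch(
  original: ElementFeatures,
  candidates: ElementFeatures[]
): ElementFeatures | undefined {
  return candidates.reduce<ElementFeatures | undefined>(
    (best, c) =>
      !best || similarity(original, c) > similarity(original, best) ? c : best,
    undefined
  );
}
```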

[00:06:38] Joe Colantonio Interesting. I want to dive in a little bit more on how this all works. But first, you already had a solution back in 2019, and that's sort of about when Gen AI started to take off, but obviously you were developing it before then. How did you know AI was going to be as big as it has become? How did you see that then? Because it seems like you got in right before everyone else jumped in to slap AI on. You started AI-first way before other companies came into this space.

[00:07:02] Ryo Chikazawa Yeah. Actually, since the beginning we truly believed in AI. We thought AI was definitely going to revolutionize software testing, as well as software development itself. At the time we started Autify, there was a report, I think from Gartner, that mentioned... I forget the details, maybe we can paste it somewhere in the notes afterwards.

[00:07:35] Joe Colantonio Sure, yeah, we'll have it in the comments below.

[00:07:38] Ryo Chikazawa Yeah. It said that within a few years, maybe by 2025 or 2027 or so, a lot of companies were going to employ AI engineers, essentially AIs, on their software development teams. When I read that, I thought, this should be the future, right? AI was definitely going to evolve over the next 5 to 10 years, and that was definitely going to increase productivity. And I don't think it's going to replace our jobs; it's more about helping us, making us more productive. That was our vision from the beginning: we truly believed that AI was eventually going to revolutionize testing and software development. Actually, our first idea was something like Autify Genesis. We tried to generate test cases from BDD and Gherkin, but we failed because it was too early; the technology just wasn't there. That's why we decided to pivot into the record-and-playback type of classical approach with classical AI, which was a more realistic solution back then. But it was still AI. And the more we invested in AI, the more we believed: oh, AI is now capable of understanding an element from a screenshot, or of maintaining a scripted scenario through UI changes. The more we did, the more we believed. Then, finally, Gen AI arrived, and I thought, this is definitely the future we envisioned from the beginning.

[00:09:30] Joe Colantonio I'm just a little confused. The original solution, it seems, was using image-based AI and machine learning to figure out everything in a no-code way, and that, to me, seems like it would be the way forward, especially now with multimodal AI. So why create something new? You make a clear distinction now between that and Autify Genesis. Why is that?

[00:09:51] Ryo Chikazawa Yeah. First of all, the original product employed image recognition techniques as well as the classical machine learning approach. By the way, we also use the HTML, so both sides: HTML to extract the features, and the image to understand the page. We employ those two techniques together, both approaches, to identify the element. But to answer your question, why did we start Genesis? There are a few reasons. First of all, what a no-code/low-code product, what our original product, can solve is kind of limited within the testing lifecycle. It handles the later part of the testing cycle: if you already know what to automate, you can easily use record and playback to start creating the automated scenario. We've been supporting a lot of different types of customers, and what we found is that lots of people are spending their time creating test cases, as well as designing the automated test cases and so on. I see that the larger problem is actually outside the scope we've been solving. What I realized is that we need to help customers solve that earlier part of the problem as well, so that we can provide a whole value chain for the entire testing lifecycle. That's why we started brainstorming how we could solve the earlier-stage challenges, like test case creation. Then LLMs came along, and I thought, yes, this is actually a very good problem to be solved by Gen AI: understanding the requirement and then generating the test cases, as well as generating the test scripts. This is what Gen AI is really, really good at. The pieces came together for me: let's employ Gen AI to tackle this test case creation problem and build a whole platform to cover the entire testing lifecycle.

[00:12:20] Joe Colantonio Awesome. How does Autify Genesis work? It sounds like I can feed it a Figma file, I can feed it a requirements file, all these files. It's almost like I'm training my own context-based LLM that will then understand what needs to be done to test the application.

[00:12:37] Ryo Chikazawa That's right. That's right. It's very important to let the AI agent understand the context of your application. Don't just feed in the requirement; if you have any other supplemental documentation or information, you should definitely feed that in too. What we did for our internal use case was not only put in our PRD but also put in our entire help center documentation, which has all the information about our application's behavior. If you have that kind of help doc or user manual, that's also going to be a great input. You put all of that in, and our AI creates test cases based on that specific requirement. That's basically how it works. First it generates the list of test cases, and you review it. After that, it generates the Gherkin scenarios. We decided to go with the Gherkin format because Gherkin is widely used and well known. And why do I like Gherkin? Gherkin is very abstract. You write a Gherkin scenario in an abstract way, so you basically don't put in a lot of granular information like how you log in or what kind of credentials are going to be used. One of the biggest challenges of end-to-end tests is maintainability: the more granular you go, the more difficult the maintenance becomes. That's why we decided to go with Gherkin; in terms of maintainability, it sits at a higher layer, and it lets you regenerate the test script if something changes on the UI side. One of the biggest challenges of end-to-end testing is that every time the UI changes, or the page changes, or some requirement changes, even if the higher-level operation stays the same, all of the detail can change. So why did we go with Gherkin? Because Gherkin can be maintained at that higher layer, in an abstract way, and then you can regenerate the Playwright script when some detail changes. That's basically how we create the test cases, and the latter part is the Playwright script generation. But I can pause here if you have any questions.
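
As a rough illustration of that workflow, here is what an abstract Gherkin scenario and a Playwright (TypeScript) test regenerated from it might look like. The scenario wording, URL, and locators below are invented for illustration; they are not Autify Genesis output.

```typescript
// Abstract Gherkin scenario, kept deliberately high level:
//   Scenario: Existing user logs in and sees the dashboard
//     Given I am on the login page
//     When I log in as an existing user
//     Then I should see my dashboard
//
// One possible Playwright script regenerated from that scenario
// (placeholder URL, labels, and credentials).
import { test, expect } from '@playwright/test';

test('existing user logs in and sees the dashboard', async ({ page }) => {
  // Given I am on the login page
  await page.goto('https://example.com/login');

  // When I log in as an existing user
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('example-password');
  await page.getByRole('button', { name: 'Log in' }).click();

  // Then I should see my dashboard
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

Because the Gherkin stays abstract, only the generated Playwright layer needs to change when a label, locator, or login-flow detail changes.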

[00:15:22] Joe Colantonio Yeah. So say the requirement does change. How does it know? Is it plugged into your code base? Is there a process where developers need to feed in what they're going to do for the sprint, or something like that? How does it know a requirement really did change?

[00:15:36] Ryo Chikazawa Yeah. Our approach is not to access your code repository, because we go with black-box testing; we basically rely on the requirement from you. Whenever the requirement, the spec, is changed, or you upload a new requirement, or the Figma file has changed (we're now building an integration with Figma), when something changes we can automatically feed it to our model to regenerate the test cases. We're also building an integration with Jira, so when a new requirement is created or a Jira ticket has been updated, we can automatically feed that in as well. Either way, give us the requirement again so that we can understand the changes, and we can regenerate the test cases that will be affected.

[00:16:34] Joe Colantonio And by doing so, it regenerates the tests? So if developers change an ID in a function, or there's a new page with a new flow, it would automatically know that and update all the existing tests?

[00:16:47] Ryo Chikazawa That's right. That's right. What we generate, as I mentioned, is the higher layer, the Gherkin scenario. With that Gherkin definition, our AI agent actually interacts with your application by following the Gherkin scenarios. If the Gherkin scenario says log in to the system, the agent tries to figure out a way to log into your site. You don't need to put a lot of detail into the Gherkin, because the AI agent can figure it out. While you maintain the abstract Gherkin scenario, the AI agent figures out the detail, so you don't need to do a lot of detailed, ground-up, time-consuming work; the AI figures it out. It breaks that Gherkin step down into very actionable, executable steps while interacting with your application, figures out the operations, and then creates the Playwright script. What you need to do is review what the AI agent does. It's kind of an autopilot. When you start a session, the session kicks in, the AI agent starts accessing and interacting with your site and tries to figure out a way to log in, and while it does that, it also creates the Playwright script in real time. You can just monitor how our AI agent figures out the way to log in, and if something goes wrong you can pause the session and give the agent another input to fix the operation. It's very interactive. We have a chat window on the left side of the application where you can communicate with the AI agent. Sometimes the AI agent is going to ask you how to log in, or what the user's credentials are, or it will say, I couldn't pick the right element from the screen, could you specify it on my behalf? The AI is actually going to ask you. You give your inputs to the agent so you can fill in the gaps and create accurate Playwright scripts. That's how it works.
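
At a very high level, an agent loop like the one described might look something like the sketch below: take an abstract step, look at the live page, ask a planner for the next concrete action, perform it, and record the equivalent Playwright line. The planner interface and action shapes here are hypothetical; this is not Autify's implementation.

```typescript
import { Page } from 'playwright';

// Hypothetical planner. In a real system this would be backed by an LLM that
// reads the Gherkin step plus the current page HTML; here it is only a type.
interface Planner {
  nextAction(step: string, pageHtml: string): Promise<
    | { kind: 'click'; selector: string }
    | { kind: 'fill'; selector: string; value: string }
    | { kind: 'done' }
  >;
}

// Execute one abstract Gherkin step: repeatedly ask the planner what to do,
// perform the action in the live browser, and record the Playwright line
// that reproduces it.
async function runStep(step: string, page: Page, planner: Planner): Promise<string[]> {
  const generated: string[] = [];
  for (;;) {
    const action = await planner.nextAction(step, await page.content());
    if (action.kind === 'done') break;
    if (action.kind === 'click') {
      await page.click(action.selector);
      generated.push(`await page.click('${action.selector}');`);
    } else {
      await page.fill(action.selector, action.value);
      generated.push(`await page.fill('${action.selector}', '${action.value}');`);
    }
  }
  return generated; // Playwright lines produced for this step
}
```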

[00:19:09] Joe Colantonio Gotcha. You've mentioned Playwright a bunch of times. So you started off as a no-code company. Why are you generating Playwright code, then?

[00:19:16] Ryo Chikazawa Yeah.

[00:19:18] Joe Colantonio How much do people need to know about coding? Why are you using this approach?

[00:19:21] Ryo Chikazawa Yeah, great question, thanks a lot. First of all, I was pretty amazed by Playwright. I used to be a software developer myself, so I've written Playwright scripts myself too, and I was pretty amazed. Playwright is amazing compared to the older solutions. It's pretty fast, very lightweight, it has lots of great element locators, and it's very stable: it can automatically wait until the element is loaded, and so on. Playwright is very good, and now I think everyone is really interested in Playwright and the community has grown a lot. That's why Playwright came into the picture. And why are we doing this as a no-code/low-code company? Because one of the biggest challenges I've been seeing is that there are two types of audience. There are the engineers, very technical people who write scripts themselves, and on the other side there are, let's say, manual testers or QA managers who don't really write code. I see a gap between those two audiences, and there's no real solution filling that gap. No-code works really well for people who don't write code themselves, but I also see lots of people writing Selenium or Cypress to do their automation. We've been thinking about how we can help that technical side of the audience as well. Those people think no-code/low-code is not for them, because they can code, and with a no-code/low-code product you sometimes can't export the scripts, so you lose flexibility. I was thinking about how we could bridge the gap between those two audiences and also provide the right solution for the technical audience. Now we've built Autify Genesis, which can definitely increase the productivity of people who write end-to-end scripts themselves, while giving them the freedom and flexibility of the solution. And if you're not very technical and don't write scripts, you can still use no-code by connecting it with Genesis. Genesis and Autify NoCode are integrated: Genesis generates the Playwright script, and NoCode can convert the Playwright script into a no-code scenario. Our new version of NoCode is going to be built on top of Playwright. If you are technical but don't want to write a lot of script for very basic operations, you can use the low-code/no-code solution and maintain things the easy way, and if you get to a very technical, complex flow, you can still maintain the Playwright on the back end. Anyway, that's why we decided to support Playwright.
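
For readers newer to Playwright, here is a tiny example of the locator and auto-waiting behavior praised above: the click and the visibility assertion automatically retry until the element is attached, visible, and stable, so no manual sleeps are needed. The URL and element names are placeholders.

```typescript
import { test, expect } from '@playwright/test';

test('auto-waiting, resilient locators', async ({ page }) => {
  await page.goto('https://example.com/account'); // placeholder URL

  // Role- and text-based locators tend to survive DOM and CSS refactors
  // better than brittle XPath or deeply nested CSS selectors.
  await page.getByRole('button', { name: 'Save changes' }).click();

  // expect(...).toBeVisible() keeps retrying until the timeout, so there is
  // no need for explicit waits around asynchronous UI updates.
  await expect(page.getByText('Your changes have been saved')).toBeVisible();
});
```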

[00:23:05] Joe Colantonio Does that also mean, then, because it's generating Playwright, that it's not vendor-locked? I mean, obviously if someone uses it without your tool they won't get all the AI, but with it they can at least still run the scripts if they decide to change course, something like that?

[00:23:18] Ryo Chikazawa That's right. What we do with Autify Genesis is completely free from vendor lock-in, because we generate the Playwright script, so you can do whatever you want. You can just take the Playwright script and run it anywhere you want. We are completely free from vendor lock-in.

[00:23:40] Joe Colantonio Now, you've mentioned the agent a few times. When you're running your tests, are you able to run in the cloud? Do you have to have an agent on all of the machines that you run on? How does that work?

[00:23:48] Ryo Chikazawa Yeah. Autify Genesis comes in the form of a desktop app; you install the desktop app. This is also a different direction from the cloud version. The reason we decided to make it a desktop app is that it can help you shift left: you might want to test against localhost, against something you're developing right now, and I think that calls for a desktop app. So it is a desktop app, and the agent is going to access either your localhost, staging, or production, whatever you specify. The AI model itself is in the cloud. If you have your own internal AI model, we're AI-model agnostic, so you can specify your internal model as well. But the model is in the cloud, while the agent works locally, inside the desktop app.
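
Because the generated scripts are plain Playwright, pointing the same tests at localhost, staging, or production can be done with a standard Playwright config. A minimal sketch follows; the APP_BASE_URL variable name and the port are arbitrary choices for this example, not anything Autify prescribes.

```typescript
// playwright.config.ts
// Switch the target environment for the same generated scripts via an
// environment variable (APP_BASE_URL is an arbitrary name for this sketch).
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    baseURL: process.env.APP_BASE_URL ?? 'http://localhost:3000',
  },
});
```

With a config like this, tests written against relative paths (for example, `await page.goto('/login')`) run unchanged against whichever environment the variable points at, e.g. `APP_BASE_URL=https://staging.example.com npx playwright test`.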

[00:24:51] Joe Colantonio Gotcha. Do you have a VS Code plugin? Is that different than the desktop app?

[00:24:54] Ryo Chikazawa Yes, we've just implemented a VS Code plugin. Inside the VS Code plugin it does the same thing; it generates the Playwright code right into your VS Code, and then you can edit it from there.

[00:25:14] Joe Colantonio There are a lot of players coming out in this exact space. Obviously, you had a head start. How would you say your solution is different? Not to say all the other solutions are bad, but is there a key differentiator, so that someone listening who meets certain criteria knows this is the tool for them, or maybe it's not the right tool for them?

[00:25:32] Ryo Chikazawa Yeah. I think there are two aspects. First of all, we generate the Playwright script. Some of the other no-code/low-code tools only generate their own scenarios, meaning you're still in a vendor lock-in situation and you don't have the flexibility of the solution. Ours is very flexible: you get the Playwright script. I think that's one of the unique parts. The other thing is that we provide an end-to-end solution, from test script creation to execution and maintenance, so if you're interested in that part, you can employ that solution too. I think we're pretty unique in providing whole end-to-end support for the entire testing lifecycle. The no-code part is also unique: our new version is a desktop app built on top of Playwright, so you can still access the Playwright; a no-code scenario is essentially the same Playwright script. I think that part is also pretty unique.

[00:26:42] Joe Colantonio Nice. And if people are listening and trying to visualize this, don't worry, we have a link down below where you can see a replay of a webinar we did on Autify Genesis, partnering with Gen AI to build test cases and Playwright scripts. It's a must-watch webinar where you can see all of this in action, and you'll find the link for it down below. All right, Ryo, before we go, is there one piece of actionable advice you can give to someone to help them with their AI automation testing efforts? And what's the best way to find or contact you, or learn more about Autify Genesis?

[00:27:10] Ryo Chikazawa Well, yeah, you can access our website and contact us there, or you can sign up for beta access to Autify Genesis from the website.

[00:27:18] Joe Colantonio Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a517. And if the show has helped you in any way, why not rate it and review it on iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:27:52] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.

[00:28:36] Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}