About This Episode:
If you'd like to accelerate your test execution and maintenance speed, and massively reduce flakiness, I've got some excellent news for you.
On this episode of the TestGuild Automation Podcast, Adam Carmi, Co-founder and CTO at Applitools, joins us to discuss new self-healing test automation technology that is a giant leap forward in test infrastructure innovation. Adam goes over how this new cloud-based testing platform replaces legacy testing grids, easily augmenting open-source test frameworks with AI capabilities (like self-healing locators). Discover how you and your teams can intelligently heal broken tests as they run, drastically reducing flakiness and execution time. Lastly, the episode touches on the rise of AI in the testing industry and why testers and people in the testing community have a bright future. Tune in for a plethora of insights and tips on test automation.
Discover TestGuild – a vibrant community of over 34,000 of the world's most innovative and dedicated Automation testers. This dynamic collective is at the forefront of the industry, curating and sharing the most effective tools, cutting-edge software, profound knowledge, and unparalleled services specifically for test automation.
We believe in collaboration and value the power of collective knowledge. If you're as passionate about automation testing as we are and have a solution, tool, or service that can enhance the skills of our members or address a critical problem, we want to hear from you.
Take the first step towards transforming your and our community's future. Check out our done-for-you awareness, lead generation, and demand packages, and let's explore the awesome possibilities together.
About Adam Carmi
Adam is Co-founder and CTO at Applitools and the inventor of Visual Testing, bringing over 20 years of technological leadership and innovation experience to Applitools. Prior to founding Applitools, he held management, research, and development positions at Safend, IBM, Microsoft, and Intel Corporation.
Connect with Adam Carmi
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.
[00:00:25] Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today, we have with us once again Adam Carmi. He is the co-founder and CTO of Applitools. So today, we're going to learn all about a revolutionary new way to run your tests faster and more reliably in a self-healing cloud, using your existing open-source tests. If you don't know Adam, he's also the inventor of visual testing, bringing over 20 years of technological leadership and innovation experience to Applitools. Prior to founding Applitools, he held management positions at companies like IBM, Microsoft, Intel, and a bunch of others. Really excited to have him back on the show. You don't want to miss it. Check it out.
[00:01:09] This episode of the Test Guild Automation Podcast is sponsored by the Test Guild. Test Guild offers amazing partnership plans that cater to your brand awareness, lead generation, and thought leadership goals to get your products and services in front of your ideal target audience. Our satisfied clients rave about the results they've seen from partnering with us, from boosted event attendance to impressive ROI. Visit our website and let's talk about how Test Guild could take your brand to the next level. Head on over to TestGuild.info and let's talk.
[00:01:41] Joe Colantonio Hey, Adam, welcome back to the Guild.
[00:01:47] Adam Carmi Hey, Joe, it's great to be back.
[00:01:51] Joe Colantonio Yeah, absolutely. It's been years. I don't know if you know how long. I think the last time you were on episode 43, in 2015, so it's been like.
[00:01:59] Adam Carmi Wow.
[00:02:00] Joe Colantonio Eight years. Yeah, it's been crazy.
[00:02:03] Adam Carmi Time flies. You haven't changed at all.
[00:02:08] Joe Colantonio More white and grays, for sure. You look the same. How nice.
[00:02:12] Adam Carmi Thank you.
[00:02:13] Joe Colantonio Yeah. So, the last time you were on the show, you introduced, I think for the first time, a new technology called visual testing. And today, I think you are going to introduce something that's also the first of its kind, something called the Self-healing Cloud. But before we get into it, for people who may have missed that episode back in 2015, what is Applitools?
[00:02:32] Adam Carmi Yeah. Applitools provides a test automation platform that today is used by hundreds of the world's leading companies. What's special about Applitools is basically two technologies that we've developed, disruptive technologies that are unique to us. The first of them is Visual AI, which is basically a large collection of computer vision algorithms that mimic human vision with very high accuracy. It allows us to take a very unique approach to test automation, basically inventing visual testing, but actually proposing a new way of implementing your end-to-end tests, component tests, and anything that has to do with testing through the client, anything that tests the UI. That eventually allows you to drastically increase your test coverage while reducing the amount of test code that you need to write by 80%, requiring a fraction of the maintenance effort that you would normally need to maintain your automated test suites, and also opening the work up to everyone on the team, so any manual tester or business analyst, every member of the team who doesn't have developer skills, can take full ownership of maintaining the automated tests. That's the revolution around Visual AI. The second invention is the Ultrafast Grid, which we came out with around 2018, and which is basically the best way to test across devices and browsers. It allows you to execute your Visual AI-based tests across different devices and browsers to get that coverage orders of magnitude faster, more reliably, and more securely than you can with anything else. Those are the two big technologies we've had. The Self-healing Execution Cloud that you mentioned is the latest and a huge recent addition. Yeah.
[00:04:29] Joe Colantonio Absolutely. So I want to dive into that. I'm already off-topic. I just want to say, since I learned about Applitools in 2015, I bring it up as an example of how I see AI progressing in the future, so I just want to get your input on this. With Applitools, what it does is bubble up insight, and it requires a person to say, is this correct? There was a change, but it requires a person to look at whatever the AI algorithm has bubbled up as an issue, and someone needs to go in and say yes or no. I see that as where we go in the future with AI, and where other vendors are heading. Do you think that's the correct implementation of AI solutions, rather than just relying on the algorithm to do it? It's going to require a human being, as always, to make sure whatever it's prompting you with is valid.
[00:05:11] Adam Carmi Yeah, I think that at the end of the day, first of all, we see people every day arguing about what is right and wrong. I mean, I see it between my engineers, and between engineers and the product team. One guy likes it, the other one doesn't. One guy thinks it's great, the other doesn't. Who says that the machine can really decide here? That's my first question. So even if we could implement it, which we can, who says that the machine can decide? The second thing is, I think that today, especially if you want to sell a product that people will buy, people are very, very wary of allowing some obscure AI to make decisions. When it comes to testing, people want certainty. They want to know what coverage they're getting and what risks they're taking. Okay. And if you want to talk about the future, when I look at ChatGPT and Large Language Models and Generative AI and all this stuff that came into our lives in the past months, people are all of a sudden afraid of how it's going to change the world. Not to get into where it really is versus where the hype is today, which is a completely different thing, we need to give it time and see how it will evolve, in which areas it will really have a big impact and where it won't have any impact. Let's assume the worst. Let's assume that AI will start building applications for us. This would be the best thing that ever happened to the testing world. I mean, the better this thing is, the less qualified the engineers will be, with less ability to understand what is being built and how it is being built. Today, when you have a new feature, you have a developer who knows what he's doing as a software engineer. He knows what to change. He can come back to you and say, I changed this, it's safe, this is the impact, this is what you need to cover in your tests. Imagine working with a tool that, when you ask it the same question twice, gives you a different answer.
And the guy that is using it is not a good engineer, so you can't really understand what happened or influence it, right? So the world of testing is going to be much more important, and the need for testing tools will grow. You don't want to rely on another AI to tell you whether the other AI did well. It would be a true regression-based test: you would like to say, okay, I know how it worked before, and I want to know everything that changed from how it used to behave, because you can't count on it not changing. Okay. I think that the future is very bright for us testers and those of us working in the testing community. And to go back to your initial question, I think people want to be involved. I think they don't want to do repetitive, boring, mindless work. They want to use their intellect, their understanding of the customer, the expected user experience, and a bit of knowledge of the product and what they intended to happen, and be able, in the most efficient way, to make sure that it works that way. And the most efficient way is for them to just look at changes and say, yes, this is okay, or no, this is a bug.
[00:08:31] Joe Colantonio I love it. Great insight. And you kind of implied a lot of people think that AI is going to replace testers. But once again, I always go back to Applitools. I used to work for a company that used to compare X-rays to one another, and it was very difficult to do manually. We brought Applitools in, and it was able to spot differences I don't think anyone could spot manually. When I'm asked how many testers that replaced, the answer is zero. So it's the same exact thing I see in the future. So absolutely.
[00:09:01] Adam Carmi Look, at the end of the day, this is a productivity tool. You still need the skills to evaluate what you get and ask for corrections. When the spreadsheet was invented, people thought that accountants and bookkeepers wouldn't have a job anymore. The exact opposite happened. So maybe there'll be many more jobs open, because more people will be able to apply for them, but the skills will still be required. This is how I see it, at least.
[00:09:32] Joe Colantonio Very nice. All right, so besides Applitools Eyes, which does the visual validation piece, you mentioned the Ultrafast Grid, and the latest technology is the Execution Cloud. Are these all built on top of one another? Are there layers on top of each other? Or are they all different products that don't relate to one another?
[00:09:49] Adam Carmi They're all different products. Let's talk about the Execution Cloud. The Execution Cloud is completely independent of the other products, so you can use it today. I mean, if you need to run your tests at scale, you definitely don't want to build the infrastructure to produce that scale by yourself, for a variety of reasons I can expand on. But if you need to run at scale, you can just run your tests on the Execution Cloud, regardless of whether they're using Eyes for visual validation or not. So even if you just have Selenium and WebdriverIO tests with plain traditional assertions in them, not depending on any other Eyes product, you can use our infrastructure to execute them at scale and get all the upsides. It would be the fastest solution by far, the cheapest solution by far, and you get self-healing out of the box without changing a single line of code in your tests, which is just super awesome.
[00:10:51] Joe Colantonio Yeah. Let's dive into that a little bit more then. The better question I should ask is, what is the Execution Cloud at a high level? It seems like it's something you can use with your existing tests or open-source tests, that you just run in the cloud, and it's automatically able to self-heal if it can't find an element or a visual identification. Am I understanding that correctly?
[00:11:12] Adam Carmi Yeah. So it has nothing to do with Visual AI or with Eyes or validation. What it does is basically self-heal your tests, your open-source tests, by itself. And let's talk about, if you want, what self-healing is, for the people in the audience who haven't heard of it.
[00:11:29] Joe Colantonio Yeah, that'd be great.
[00:11:31] Adam Carmi What it is: if we look at any of our tests, each step of the test interacts with some element in the UI, either finding an element in order to click it or type some text into it, providing some input, or finding it just to query whether a property has a certain value. Every step of every test is either an interaction step or a validation step. Now, in the UI, on the current page of the application, there are hundreds if not thousands of elements. We need a way to find the elements that we want to interact with. And in order to do that, we need to execute some queries, queries that basically find the elements for us, and we call these queries element locators, right? I can look, for example, for a link under an element that has an id attribute with a certain value. Now, here's where the problem happens, and this is how tests break. The problem is that if the UI changes, because we are adding a feature or some developer decides to refactor it because he thinks it could be built in a better way, the structure of the UI changes, and that can cause some of the queries in our tests to fail. It's still the same query, but it fails to find the element we are looking for, and this causes our test to crash. Okay, the dreaded no-such-element exception in our test. The worst thing about this is that these failures always happen at the worst possible time, right? You're just about to release and you want to run the tests and release fast, or a developer is pushing a code change and he wants to see the results and move on to his next task. And all of a sudden, boom, dozens of tests fail, and there's absolutely no choice: you have to stop whatever you had planned to do at that time. You have to put it aside and start going over the test results, figuring out what went wrong, going into the tests to fix them, and rerunning the whole thing, hoping that this time it will pass. It can waste hours, right, just in that work.
And it's the worst point in time to have to do it. So basically, what self-healing does is that when you are trying to find an element and the locator no longer works, self-healing is still able to find that element on the page for you and allow your test to continue passing. This is the bottom line. It means that even when the UI changes, your test will continue to pass, and later on, when it's convenient for you, you can go and update the tests and correct them. Also, because self-healing is so robust, at least ours is, and we can continue to find the element for you forever, even if the UI continues to change again and again and again, you never have to go back and fix your tests. So basically, it's enough for you to have a single run of the test on our infrastructure that finds your element once. That allows us to continue finding it again and again and again, despite the fact that the locator is no longer relevant to how the page is implemented. Okay. Now, just to give an analogy: just like when a test fails, you as a human being are capable of looking at the UI and finding that element, saying, oh, here it is, let's look at its attributes and fix the test so we can find it again, this is exactly what self-healing does. The test gets corrected, and you don't need to do it yourself. Okay, the test code remains the same, but on our infrastructure, we are still able to find the element. I can go into how that works if it's interesting, but that's the bottom line of the value. And again, when that happens, it means that your entire test suite becomes much more stable, your builds become much more stable, and you save hours looking at failed test results and analyzing them, and hours going into test code and fixing it again and again and again after each UI change.
So the impact on stability and maintainability is immense, and this impacts your ability to succeed with your test automation project, because you have more time to keep up with coverage. You spend less time rerunning tests and dealing with flakiness and broken tests, and more time enjoying getting that coverage as soon as possible.
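The failure mode Adam describes can be sketched in a few lines of plain Python. This is an illustration only, with no real browser: the "page" is a list of dicts, and all element names and ids are made up. It shows how a locator query is coupled to the page's implementation, so a harmless refactor crashes the test:

```python
# Minimal sketch of a brittle locator (no real WebDriver; fake DOM as dicts).

class NoSuchElementException(Exception):
    """Raised when a locator matches nothing, crashing the test."""

def find_element(page, attr, value):
    """Naive locator: return the first element whose attribute matches."""
    for element in page:
        if element.get(attr) == value:
            return element
    raise NoSuchElementException(f"no element with {attr}={value!r}")

# Version 1 of the page: the test locates the button by its id.
page_v1 = [{"tag": "a", "id": "nav-home", "text": "Home"},
           {"tag": "button", "id": "submit-btn", "text": "Place order"}]
assert find_element(page_v1, "id", "submit-btn")["text"] == "Place order"

# Version 2: a refactor renamed the id. The user sees the same button,
# but the query fails and the test crashes with the dreaded exception.
page_v2 = [{"tag": "a", "id": "nav-home", "text": "Home"},
           {"tag": "button", "id": "order-submit", "text": "Place order"}]
try:
    find_element(page_v2, "id", "submit-btn")
except NoSuchElementException:
    print("test crashed: no such element")
```

Nothing about the button's behavior changed between the two versions; only the locator's assumption about the implementation broke.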
[00:16:07] Joe Colantonio And I think people can definitely see the value in hearing this. But I guess the question is, you said you can dive into how it works. I know there are two ways some vendors work already. One uses an image to find a location, based on a baseline image of something, to know where the element is most likely located. And the other one uses backup methods: it'll have a history of, here's the XPath, here's the CSS, here's the index, and it will fall back to the ones it has in memory from past runs. So if the first preferred way fails, it falls back to the other ones. Is this the same thing? Is this different?
[00:16:43] Adam Carmi So we've developed our own algorithms for that. By the way, we did the second approach back when we were maintaining Selenium IDE. We built in the alternative locators method with a fallback, which was a very naive approach to it. We've actually built a new engine to deal with self-healing in the Execution Cloud. It works amazingly well. Even if you try very hard to make it fail, it doesn't fail. And we actually compared it with everything else that's out there, and it's much better. But everyone needs to try it for themselves and reach that conclusion. But here's how it works. Basically, every time we find an element using a certain locator, we capture hundreds of data points about that element: all its attributes, its location in the hierarchy, details about its ancestors and neighboring elements, basically everything that you can think of, including an image of the element. We store all that data in a database, using the locator that you used to find the element as the key. And every time the test runs and we rediscover the element, we learn everything about it from scratch, so it's up to date with the changes in the element. Even if the element is still locatable using your locator, it might change in various ways, and we keep track of all those changes all the time. Now let's say the UI changed, and you're using the same locator, which is now broken, to find the element. We try to find the element with your locator, and we fail to do it. Now, instead of just returning an error that would fail your test, which is the usual thing that would happen, we take the locator, go to the database, and extract this wealth of information, everything we know about that element. Then we run our own proprietary algorithms that use all that data to locate that element. And we have a huge amount of data. Okay.
So we search for it using our algorithms and our methods, using all the data that we have on that element. And if we manage to find it, we relearn everything about it, storing it in the database under the same broken locator, the same locator that no longer works. We update that entry, and we indicate in our dashboard an alternative, strong locator that you could use to go ahead and update your test if you want to. We indicate that there was a self-healing event, so you can go back and track all these events if you want to and fix your tests. And then we just return the element that we found to your test. So as far as your test is concerned, nothing happened: you looked for an element, you found it. Because we learn everything from scratch about that element, the next time you run, you can still use the broken locator. We know exactly how that element looks, and we'll find it for sure. And if it changes again, every time we find it, we keep learning everything about it. So basically, we can postpone the point of you having to fix a broken test indefinitely. Okay, that's the idea. Now, you can keep using your broken locator. At this point in time, it is simply a string, a key as far as Applitools is concerned, that is used to find your element, and it no longer needs to have anything to do with how the UI is implemented. If you prefer, you can go ahead and use the information in our dashboard, copy the new selector that we proposed, paste it into your code, and fix it that way if you want. Then you can go back to relying on your locators and actually find the elements that way. But you don't have to do it.
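The flow Adam walks through, learn a fingerprint on every successful lookup, key it by the locator string, and fall back to that fingerprint when the locator breaks, can be sketched in miniature. To be clear, the scoring and data model below are illustrative toys, not Applitools' actual proprietary algorithms, and the "page" is again a list of fake element dicts:

```python
# Toy sketch of self-healing lookup (not Applitools' real algorithm).

fingerprints = {}  # locator string -> last known data about the element

def similarity(candidate, fingerprint):
    """Count how many remembered attributes the candidate still has."""
    return sum(1 for k, v in fingerprint.items() if candidate.get(k) == v)

def find_with_healing(page, locator_attr, locator_value):
    key = f"{locator_attr}={locator_value}"
    for element in page:
        if element.get(locator_attr) == locator_value:
            fingerprints[key] = dict(element)   # re-learn on every success
            return element
    # Locator broke: heal from the stored fingerprint, if we have one.
    if key in fingerprints:
        best = max(page, key=lambda e: similarity(e, fingerprints[key]))
        if similarity(best, fingerprints[key]) > 0:
            fingerprints[key] = dict(best)      # re-learn under the old key
            return best                         # the test keeps passing
    raise LookupError(f"no element for {key}")

page_v1 = [{"id": "submit-btn", "tag": "button", "text": "Place order"}]
page_v2 = [{"id": "order-submit", "tag": "button", "text": "Place order"}]

find_with_healing(page_v1, "id", "submit-btn")           # learns fingerprint
healed = find_with_healing(page_v2, "id", "submit-btn")  # heals via fingerprint
print(healed["id"])  # order-submit
```

Note how the healed element is re-learned under the old key, which is what lets the broken locator string keep working indefinitely, exactly the "postpone the fix" property described above.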
[00:20:20] Joe Colantonio All right. So someone listening to this may say, well, that sounds like it has some sort of overhead at times, but you said it's faster when it runs on the cloud. So how do you compensate for it? It seems like it's doing a lot. There has to be some sort of layer that's adding some extra seconds to the execution.
[00:20:37] Adam Carmi Yeah, it adds a few milliseconds for finding an element in case self-healing needs to kick in and find it. But I mean, I don't want to name specific vendors, but I did a webinar a week ago and I showed live how it actually runs twice as fast as a leading lab provider, without naming names. Twice as fast. So on one hand, this other vendor does not have the self-healing layer, and we're still twice as fast. And with self-healing, we are talking about saving many, many seconds on failed tests, right? It could be 10, 20, or 50 seconds, a minute, depending on how long the test is. So adding a second of overhead while removing half a minute of overhead is definitely worth it. But in general, if you take the amount of time spent, say, rerunning the tests again and again and again, going into the test code and fixing it, and spending the time looking at the logs and investigating the problems, for sure we save a huge amount of time. Not just execution time, but work time, productivity time, and mental health time.
[00:21:52] Joe Colantonio So you beat me to the follow-up, because that's exactly it. A lot of times when people think of time, they just think of execution. But most of their time is really spent on maintenance. Like, oh my gosh, I have 900 tests, and it could just be an element changing, and it takes you hours, and the next build is out and running. This will save you from all that.
[00:22:09] Adam Carmi Of course. Yeah.
[00:22:11] Joe Colantonio What I love about this as well is, just like you've always done it, someone will always say, oh, well, how do I know it's not hiding a real issue? Well, it's not trying to hide anything. It will tell you: we did this, here's what it was, and here's what it is now, so people can dive in more.
[00:22:28] Adam Carmi Yeah, that's for sure. But also, a lot of people, I mean, don't look at things. I mean, okay, so what was the issue? An implementation detail in your page changed? That's not an issue. If it was important to you that a certain element has a certain attribute, and that attribute shouldn't change, you should have asserted on that. That would be the correct way to do it. You would like to see the test fail to tell you: I expected this attribute to have that value, and it's not what I expected. It should be an assertion. It shouldn't be, oh my God, no-such-element exception, something crashed, and figuring out what happened. If something should be validated, if your test is trying to prove that something is as expected, then there should be an assertion there. Interaction code is not an assertion. Okay? If you try to click something and the page implementation changed, you still want to click it. This is what is supposed to happen, so you can get to the validation part of the test and not lose coverage.
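Adam's distinction between interaction and validation can be made concrete with a small sketch. This is a hypothetical example (fake elements as dicts, no real WebDriver, invented test name): the property the test actually cares about is stated as an explicit assertion, so a meaningful change surfaces as a readable assertion failure rather than an incidental locator crash:

```python
# Illustrative only: fake "elements" as dicts instead of WebDriver objects.

def checkout_test(page):
    """Interact first, then validate explicitly what the test cares about."""
    # Interaction: we only need to reach the button; its class name or id
    # is an implementation detail and should not, by itself, fail the test.
    button = next(e for e in page if e["text"] == "Place order")
    # (a real test would click the button here)

    # Validation: the test's actual claim. If this matters, assert it,
    # so a change yields a clear failure instead of a locator crash.
    assert button["enabled"], "expected the order button to be enabled"
    return "passed"

page = [{"text": "Place order", "enabled": True, "id": "order-submit"}]
print(checkout_test(page))  # passed
```

If the button's id changes, the interaction still succeeds; only a change to what is explicitly asserted, here the enabled state, fails the test, and the failure message says why.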
[00:23:36] Joe Colantonio All right. So some other questions are popping up. I know people are probably thinking, oh, this sounds great, but how hard is it to implement? And two, will I be locked into the solution forever and ever if I start using it? So the first one: how hard is it to get started using it?
[00:23:50] Adam Carmi So it's not hard at all. You just take your existing tests. If you have, for example, Selenium and you are using a WebDriver instance, a remote WebDriver, to run tests either locally or in-house or with a third-party provider, you just need to use the same remote WebDriver and point it at the Applitools infrastructure, which is super easy. It's just a small change in your test framework, and you don't even need to change any lines of code in your tests. So it's very, very, very easy to get started with. And what was the other, the second question? Oh, lock-in. So again, I think the most exciting thing about the self-healing Execution Cloud, regardless of whether the level of self-healing that we provide is slightly better or much better or slightly inferior to what other vendors are providing, is this: self-healing is not a new thing, we didn't invent the concept. It has been around for a few years, but up until today, vendors used it as a premium feature to lure customers into their closed platforms. It was a way to lure you in and get you to make a decision to throw away all your existing investment in test automation, which maybe you worked years to build, and start building everything from scratch in their system. That means you lost all the work that you've done and your existing investment. It means you lost your freedom to choose which framework you want to work with; now you're tied to that specific platform and you have to use the APIs and the tools provided to you by that vendor. And third, you're pretty much locked in, because if you dislike the platform, it's a very big problem to leave it: you have to throw away everything and start from scratch again. A lot of teams are busy helping their companies succeed and don't have time to wipe out everything and start building everything from scratch.
The beauty of the self-healing Execution Cloud is that there is no vendor lock-in. You have your existing tests, you own them, and you choose whatever frameworks you want to use. All you need to do is run the same tests on our infrastructure, and you get self-healing without changing a line of code. If you don't want to run with us anymore, simply run elsewhere. It's as simple as that. So there is no vendor lock-in in this case, and you are still free to do whatever you want. This is, I think, the biggest innovation here and the biggest impact that this has. This is the world's first self-healing infrastructure for open-source frameworks. Okay? And this is the main highlight.
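The "small change in your test framework" Adam mentions is essentially repointing where the remote WebDriver connects. The sketch below is a rough illustration of that idea, not Applitools' actual API: the URLs, the `cloud:options` capability name, and the helper function are all placeholders invented for this example (check the vendor's documentation for the real endpoint and capabilities). To keep it self-contained, it only builds the settings a `webdriver.Remote(...)` call would receive:

```python
# Hypothetical sketch: swapping the remote grid endpoint of a Selenium test.
# URLs and capability names below are placeholders, not a real vendor API.

def remote_driver_config(grid_url, api_key=None, browser="chrome"):
    """Build the settings a webdriver.Remote(...) call would be given."""
    caps = {"browserName": browser}
    if api_key:  # cloud grids typically authenticate with a per-account key
        caps["cloud:options"] = {"apiKey": api_key}
    return {"command_executor": grid_url, "capabilities": caps}

# Before: the test runs against an in-house Selenium grid.
in_house = remote_driver_config("http://localhost:4444/wd/hub")

# After: the same test, pointed at a hypothetical cloud endpoint.
# The test code itself is untouched; only this one config line differs.
cloud = remote_driver_config("https://exec.example-cloud.test/wd/hub",
                             api_key="MY_API_KEY")

# In a real framework this dict would feed something like:
#   driver = webdriver.Remote(command_executor=cfg["command_executor"], ...)
print(cloud["command_executor"])
```

The point of the sketch is that the switch lives in setup code, which is also why leaving is cheap: pointing `command_executor` back at another grid undoes it.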
[00:26:45] Joe Colantonio Love it. I'm just looking at my notes from your webinar, and I'll have a link to it in the show notes for people who actually want to see this in action with the coding examples you did. But you went over why you can use your existing open-source tests really easily, and it's lightning fast. You also mentioned affordability, but I don't know if you mentioned a price, or if you can mention a price. Is there a percentage of how much more affordable it is than another solution? Or what do you mean by affordable?
[00:27:09] Adam Carmi So yeah. I can't name vendors, but of course I can mention price. For example, one of the leading vendors is offering concurrency in its cloud for $1,500, and that's the lowest price. The Execution Cloud costs $800 per concurrency. Okay. So that's the difference. It's very, very, very affordable, which means that you can really save a lot of money running on our infrastructure, in addition to getting a superior product.
[00:27:42] Joe Colantonio Nice. Also, I have in my notes that it's a single platform for authoring and executing tests. What's the authoring piece here?
[00:27:48] Adam Carmi So authoring is using Eyes. Basically, you can use Eyes to author your tests and ease the maintenance of your tests, with visual and functional coverage and all the benefits that I mentioned in the beginning. And you can also now execute them on our infrastructure, so you get the whole thing within a single platform.
[00:28:10] Joe Colantonio So I always like to think about how someone listening can actually implement this, or where it makes sense, because I assume it doesn't always apply everywhere. So for the self-healing: if someone's listening, in which situations will self-healing help them, as opposed to situations where it won't? I don't know if that makes sense, but.
[00:28:28] Adam Carmi Yeah, yes it does. So basically, it's helpful when the UI changes in a way that breaks your locators. If you have extremely disciplined teams, where the developers are very, very conscious of testing, each element has its own stable ID, they do code reviews, and they never make mistakes or change the IDs or forget to add them, then your tests are probably super stable and everyone's happy, so the impact won't be that big. But for the 100% of teams where that doesn't happen, it will help you stabilize your test suite, prevent failures, and reduce the amount of time you spend reviewing test results and fixing your tests following UI changes. Of course, the more frequently your pages change, the more value it will give you. Okay. Now, it is also helpful in some other cases. For example, in some situations, teams have test automation engineers who are less experienced and not so good at choosing good locators. What happens in this case is they end up choosing a bad one, which causes the test to fail much more often, or to fail sooner than it otherwise would have. Self-healing locators solve that problem, because all they need is for that bad locator to find the element once. Once that has happened, you have the power of self-healing to find that element for you, with all the data and the algorithms that I mentioned before. Another case where it's very useful is if your application simply has weak locators, which is the opposite of what I started with. In many cases, you have a development team that is just building the product and doesn't care about testing, and another team that needs to deal with testing. No one thought about test stability, there are no strong locators, and the team needs to deal with what it has. It's very difficult to choose one locator that is good, because there isn't one. Self-healing is very, very useful in that case as well.
And the last example is dealing with applications that have a dynamically generated UI or dynamic IDs. Everyone who has tried to automate Salesforce, for example, with its dynamic IDs, using Selenium, knows that it's a nearly impossible task. But with self-healing it is possible and doable, because it's enough to have a single run that is able to find the element. From that moment on, self-healing takes care of it, even if the UI is regenerated and elements have dynamic attributes. These are the use cases.
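The mechanism Adam describes can be sketched in a few lines of plain Python. This is only an illustrative toy, not Applitools code: the `HealingLocator` class, the dict-based "DOM", and all names here are assumptions invented for this sketch. The idea it demonstrates is the one from the conversation: once a locator finds an element successfully, remember the element's attributes; if the locator later fails (say, a dynamic ID regenerated), fall back to the candidate that best matches the remembered attributes.

```python
def find_by_id(dom, element_id):
    """Primary lookup: match on the 'id' attribute (which may be fragile)."""
    for el in dom:
        if el.get("id") == element_id:
            return el
    return None

class HealingLocator:
    """Toy self-healing locator: heals from attribute data captured on a
    previous successful find. Purely illustrative, not a real API."""

    def __init__(self, element_id):
        self.element_id = element_id
        self.snapshot = None  # attributes remembered from a successful find

    def find(self, dom):
        el = find_by_id(dom, self.element_id)
        if el is not None:
            # Success: remember other attributes for future healing.
            self.snapshot = {k: el[k] for k in ("tag", "text") if k in el}
            return el
        if self.snapshot is None:
            return None  # never found it, so there is nothing to heal from
        # Heal: score candidates by how many remembered attributes match.
        def score(c):
            return sum(1 for k, v in self.snapshot.items() if c.get(k) == v)
        best = max(dom, key=score)
        return best if score(best) > 0 else None

# First run: the ID works, and the locator records the element's attributes.
v1 = [{"id": "btn-7f3a", "tag": "button", "text": "Checkout"}]
loc = HealingLocator("btn-7f3a")
assert loc.find(v1)["text"] == "Checkout"

# Second run: the ID regenerated (like Salesforce's dynamic IDs), but the
# locator heals by matching the remembered tag and text.
v2 = [{"id": "btn-9c1d", "tag": "button", "text": "Checkout"},
      {"id": "lnk-2", "tag": "a", "text": "Help"}]
healed = loc.find(v2)
print(healed["id"])  # the same button, found despite the changed ID
```

This also matches Adam's point that the bad locator only has to work once: the snapshot taken on that single successful run is what every later healing attempt matches against.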
[00:31:11] Joe Colantonio Great point about Salesforce for sure, I know a lot of people struggle with that. A random thought popped into my head as you were mentioning that first use case, an inexperienced tester choosing a poor selector: could this almost be used as a training tool over time? Like, I used this selector, but it's telling me this other selector would have been better. It's almost like training your engineers over time, maybe. I don't know.
[00:31:32] Adam Carmi Yeah, perhaps. I mean, we didn't build it as a training tool, but it's actually super easy to filter your test suite runs, and the tests within them, to find all the occurrences where self-healing kicked in. There you'll see exactly the failed locator that you tried to use and a recommended strong locator for that element, which we offer as a way for you to get an idea of how to update your test. But people can learn from it. If they see those failures often enough, it can definitely lead them in the right direction.
[00:32:12] Joe Colantonio One other thought I want to squeeze in here before we go. We talked a lot about Selenium and WebDriverIO. I know this was answered in the webinar, but some people may have missed it. Playwright has been really, really growing in popularity, and Cypress has been growing on and off; it's still going. Anyway, are you going to be supporting these other technologies, or are you going to stick just to the Selenium ecosystem?
[00:32:34] Adam Carmi For sure, we are going to support them. You know, it's funny. First of all, Selenium is a very, very common framework. Although it's not the latest and greatest, it is still an excellent framework, and many, many teams are using it. There are plenty of legacy projects that rely on it and will continue to rely on it for many more years. However, we are also working on an autonomous product, a fully autonomous test framework, which is going to come out later this year, and we actually built this infrastructure for it. But that's a topic for another talk, I guess.
[00:33:17] Joe Colantonio Have to get your back. Yeah, okay.
[00:33:19] Adam Carmi Yeah. We built this infrastructure to help us run the autonomous tests, and we've already been using it internally, the Execution Cloud, for two years now. Then we decided it was time to turn it into a product of its own. The reason it is based on Selenium isn't that we sat down and said, hey, let's choose a new framework to build it for. We just used the infrastructure we already had in place, the way we were using it internally. So this is why, today, it works with Selenium and Selenium-based frameworks. However, we are already working on adding support for Cypress and Playwright, and very soon they will be available as well. We also have a long backlog of additional frameworks, including Appium support, and so on, which will come out in due time. Of course, we are super excited about this offering. It is getting huge traction and is very well received, and that only encourages us to invest more in this infrastructure.
[00:34:23] Joe Colantonio Awesome. Okay, Adam, before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts? And what's the best way people can get their hands on the self-healing cloud?
[00:34:35] Adam Carmi The best advice for automation? Not necessarily from the practitioner's point of view, but from the point of view of the value you get from automation, I would encourage you to do two things. First of all, make sure you run as frequently as possible, as fast as possible, and with as much coverage as possible. Don't run just once before a release. Don't just run nightly; run all day, on every change. This is what best-in-class teams do. You need to get to a point where the build takes no longer than 15 minutes, which is just about how long a developer is willing to wait before moving on to the next task. If you can get that coverage, and give developers feedback on the code changes they just pushed before they move on, you will actually make a difference. They will fix the code they just submitted before starting something new. That saves the hours lost switching from the new feature back to the old feature and back again. You will have commits with fewer bugs, and you will improve the velocity of the entire team, eventually ending up with more features for your customers and helping the business succeed. This is where you should start; this is what best-in-class teams do. And in order to get this combination of very fast execution and a lot of coverage, and still be able to maintain all of it, you should rely on the modern tools that are out there, tools that rely on AI to take out a lot of the coding and maintenance overhead and drastically increase the coverage of your tests, allowing every member of the team to maintain the automated tests, just like the Applitools products do. So don't try to save money by relying on open source alone. You can still use open source and use these tools to boost it, but don't try to save that money. These tools will actually save you, and your business, a lot of money.
So this is my advice. Now, in order to get started with the Execution Cloud, all you need to do is go to our website, find the Execution Cloud banner, and ask for access, and someone will set up an account for you. You can do a free trial and try it out. So it's super easy.
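Adam's 15-minute target above lends itself to a quick back-of-the-envelope check: given a suite's total serial runtime, dividing by the feedback window tells you how many parallel shards you would need. The numbers below are hypothetical, purely for illustration:

```python
import math

suite_minutes = 120   # hypothetical: total serial runtime of the whole suite
target_minutes = 15   # the feedback window Adam recommends

# Number of equal parallel shards needed so each finishes within the window
shards = math.ceil(suite_minutes / target_minutes)
print(shards)  # -> 8
```

In practice this is exactly the trade a cloud execution grid makes for you: buy down wall-clock time with parallelism so developers get feedback before they context-switch.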
[00:37:12] Thanks again for your automation awesomeness. Links to everything we covered in this episode can be found at testguild.com/a450. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the TestGuild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:37:46] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider, and you want to offer real-world value that can improve the skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.