Debunk Autonomous Software Testing Myths with Tobias Müller

By Test Guild

About This Episode:

In today's episode, we're thrilled to have Tobias Müller, a software development veteran and an expert in autonomous software testing. With over a decade of experience in the field, Tobias brings a wealth of knowledge, especially from his work with TestResults.io, where he delves into the AI-driven aspects of testing.

Join us as Tobias debunks common myths surrounding autonomous software testing. He clarifies the often misunderstood differences between automated and autonomous testing and expresses a healthy skepticism about the current capabilities of fully autonomous tools. Despite the challenges, Tobias sees a promising future where autonomous testing could significantly ease the testing process by harnessing vast data pools and sophisticated learning models.

We also explore the dynamic nature of the testing landscape, advocating for a synergy between human proficiency and AI that enhances testing efficiency without replacing the human element, and we examine the potential and limitations of AI in testing, underlining the crucial role of human oversight in decision-making.

Click here to be one of the first to discover how to automate non-automatable applications!

About Tobias Müller


Tobias has over 25 years of experience in software development and over 11 years in software testing in regulated markets like MedTech/Life Science and FinTech. Backed by tons of real-life experience and a visionary mindset of how automated software testing should work today, he and his team built the autonomous software testing platform TestResults.io to increase software quality standards worldwide. He is also an international speaker and an AI trainer at the National Training Week in Malaysia.

Connect with Tobias Müller

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:01:53] Hey, it's Joe, and welcome to another episode of The Test Guild Automation Podcast. Today, we'll be talking with Tobias, all about debunking autonomous software testing myths. I talk to a lot of people, and there are all kinds of things floating around. And Tobias is an expert. He's really going to shed some light on a lot of these things that you may not understand correctly or may be misunderstanding. So really excited to have him back on the show. If you don't know, Tobias has over 25 years of experience in software development and over 11 years in software testing in regulated markets. So he knows all the hard environments like medical and fintech, and it's backed by tons of real-life experience. And he has a visionary mindset of how automated software testing should work nowadays, because he's seen how it worked before, and he has his hand on the pulse of AI and everything that's happening now. He and his team have built a really cool autonomous software testing platform called TestResults.io, which really helps you increase software quality standards worldwide. It has a bunch of customers and clients, and it works beautifully. So really excited to get his thoughts on this as well. He's also an international speaker, he actually spoke at Automation Guild a few years ago, and an AI trainer at the National Training Week in Malaysia. So he also knows AI; it's not just a buzzword for him. Really excited to have him back on the show. You don't want to miss it. Check it out.

[00:03:12] Joe Colantonio Hey, before we get to today's awesome interview, be sure to check out TestResults.io's new product, User One, that Tobias mentions later on in this episode. It's really cool because it aims at helping you automate non-automatable applications and enhance your end-to-end testing with automated software testing from the most important perspective: your user's. So sign up for free now using the link down below, and tell them Test Guild sent you. Hey, Tobias. Welcome to The Guild.

[00:03:45] Tobias Müller Hi, Joe. Thanks for having me back.

[00:03:47] Joe Colantonio Yeah. Great to have you finally back. It's been a while, so a lot of things have changed in the industry. I think the last time we spoke might have been almost two years ago. What's the big buzz going on now with AI?

[00:03:58] Tobias Müller I think it has moved on from AI to autonomous testing, hasn't it? Two years ago it was all about AI and how we integrate AI into products. Meanwhile, it's all autonomous testing.

[00:04:07] Joe Colantonio Absolutely. So that's a good point. I've been hearing a lot of things, and I'm not sure people know exactly what autonomous testing is. I think they just equate it with the way they understand automated testing. What's the difference between automated testing and autonomous testing nowadays?

[00:04:22] Tobias Müller Yeah, Joe, that is a good question actually. What is autonomous testing? In the end, I dissected all of the different tools out there that said, hey, we do autonomous testing, and most of the time you find something you already had in the 80s: a web crawler that takes screenshots of all of the pages, compares the screenshots every single day, and highlights if there's a change on a page. In a sense, you can see that as autonomous, because, yeah, it does it on its own, it finds problems, and it can tell you there is a problem you need to fix. But that is nothing I would actually call autonomous testing. It's more just static crawling of pages. What else I've seen in the meantime is that most of the autonomous tools are wrappers around Selenium, Appium, or Playwright that take some HTML code, push it to a GPT-4 model or a different kind of LLM like Claude, get back the fields, try to identify some locators from that, and then generate some Playwright code that you can run. That's also not really autonomous. Because in the end, what does autonomous actually mean? What is autonomous testing? That is what I always say, and it's also what I say at the training week: if you had real autonomous testing, then we wouldn't need testing at all. Because then I could just take a system and tell it, hey, find me all the bugs in that system, without having to tell it upfront how the system should behave. If there is full autonomous testing, we don't need testing at all. So most of the autonomous part is actually marketing, and I'm guilty of that myself, because we also claim that our system is autonomous. You might remember from last time that we also use the term artificial intelligence a bit differently: we use AI as an abbreviation, but for augmented intelligence. And a lot of competitors jumped on that wagon and said, we have a copilot. That is the newest trend right now, right? Microsoft started with Microsoft Copilot, and now everybody has a copilot. And the best part is that there's even one vendor with a copilot that explains the test cases you automated with the tool. Apparently, the test automation is so complex that you need a copilot on top to explain the test case after it's automated. And that is where we are right now. People say autonomous testing, but in reality it's the same story we had with automated testing. What is the benefit of automated testing these days? Or is it: hey, automated testing didn't work like we always promised, so we jumped over to AI-based automated testing, and that didn't work as well, so now we have autonomous testing? That's one of the biggest problems in the whole market: all of this marketing blah blah, these buzzwords. Everything is autonomous these days, and none of it is really autonomous. There's a new player on the market as well saying, we do have autonomous testing. And the only thing I always do with all of those tools is go to the tool, because most of them can only test web pages. That's where it starts. So you can only test web pages, and is that really autonomous? Because sometimes you need to go to the SAP system in the back-end or stuff like that, or you need to check your mobile application. And when they claim fully autonomous testing, 100% autonomous testing, I give those tools a chance and just enter the URL of the tool itself in the prompt and tell it to test that system. It's actually you; you should know what you are, so if you are autonomous, test yourself. And all of them, actually all of them that I tried, failed at the login already. They're not even able to log in if there's two-factor authentication. And that is where autonomous is right now.
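To make that "wrapper" pattern concrete, here is a minimal sketch in Python. The ask_llm() helper is a hypothetical stand-in for a GPT-4 or Claude call, and the URL is a placeholder; this illustrates the pattern Tobias describes, not any specific vendor's code:

```python
# Minimal sketch of the "LLM wrapper" pattern: grab the page HTML,
# ask a language model for a locator, then drive Playwright with it.
from playwright.sync_api import sync_playwright

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a GPT-4/Claude call that would parse
    # a CSS selector out of the model's response.
    return "input[name='username']"

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://app.example.com/login")  # placeholder URL
    selector = ask_llm(
        "Given this HTML, return a CSS selector for the username field:\n"
        + page.content()
    )
    # The "autonomous" part boils down to locator guessing:
    page.fill(selector, "demo-user")
    browser.close()
```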

[00:07:59] Joe Colantonio Right. Let's break that down a little bit more. So do you believe in autonomous testing, or do you think it's just a buzzword and it's really automated testing assisted by AI? How do you define it?

[00:08:09] Tobias Müller I do believe in autonomous testing, but I think we are not there yet. It's like Elon Musk proclaiming autonomous driving would be available in 2012, right?

[00:08:17] Joe Colantonio Yeah.

[00:08:17] Tobias Müller Now it's 2024 and we still don't have autonomous driving. So I believe in autonomous testing, but not in the sense of: you give the system an application and it will just test it and report all of the bugs. What I understand under the term autonomous testing is that it just makes testing a lot easier, in the sense that there is a new set of data. Say you use an LLM, for example, that is actually trained on the use cases of different kinds of applications. What I have in mind when I talk about autonomous testing is more or less that you give the testing platform a system to test and tell it: this is a core banking application, and I want you to test the financial transaction part. And then the system, based on all of the training it got on core banking systems and financial transactions, can actually test up to 80% really autonomously and can give you an indication like: okay, the last 15 times I executed this test scenario for core banking applications, for that specific path, it resulted in this, and for the application you're testing right now, the results are completely different. It makes sense out of that, so you need to look into it. And I think that's where autonomous testing is going.
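As a toy illustration of that "compare against what the platform has seen on similar systems" idea, here is a short sketch; the scenario names, outcome labels, and data shapes are all invented for illustration:

```python
# Toy sketch: flag a test scenario whose outcome deviates from the
# outcomes recorded for the same scenario on similar applications.
from collections import Counter

# Invented baseline: outcomes from prior runs on other core-banking apps.
history = {"transfer_funds": ["balance_reduced"] * 15}

def assess(scenario: str, observed: str) -> str:
    outcomes = Counter(history.get(scenario, []))
    if not outcomes:
        return "no baseline yet - needs human review"
    expected, seen = outcomes.most_common(1)[0]
    if observed != expected:
        return (f"deviation: {seen} prior runs ended in '{expected}', "
                f"this run ended in '{observed}' - look into it")
    return "consistent with history"

print(assess("transfer_funds", "balance_unchanged"))
```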

[00:09:26] Joe Colantonio Gotcha. So what's the role of an automation engineer or software tester when it comes to autonomous testing, either once we get there or even now? Do you believe they'll be replaced, or do you still see a need for a human being in the middle of all this?

[00:09:42] Tobias Müller There's always a human being, right? I mean, did anybody in the history of mankind ever really get replaced? Everybody is afraid right now. My lawyer is afraid that he gets replaced by AI. And the company that actually supports our funding is also afraid that they get replaced by AI, because it's all document work. But nobody ever got replaced; they just extended their skills. So right now you're doing something, and then there's a next evolution in technology. Now you're doing what you did before, but in a fraction of the time it required beforehand, and you can build up new skills and be more efficient at the same time. I mean, that is actually how the whole economy works: how much value can I provide within one hour? And that just needs to increase, because otherwise the economy has problems.

[00:10:28] Joe Colantonio It's a good point about lawyers. I actually had a loan contract another company gave me. I said, let me just put it through AI and have it read it. And it brought up five points. I'm like, all right, those are five valid points, but I'm not going to trust it like an actual lawyer. It just helped me have a better conversation with the lawyer about potential pain points or issues with the contract. So it didn't replace them. It just helped facilitate better lawyering, I guess.

[00:10:52] Tobias Müller Exactly. It made you the better customer in the end, right? Because you were able to ask better questions.

[00:10:58] Joe Colantonio Yes.

[00:10:59] Tobias Müller That is what AI is right now: it's augmented. It helps you understand what is there so you can ask better questions. You are more efficient because you have more precise questions, and the answers from the lawyer are a lot better right from the beginning, so you don't have to have an hour-long discussion before you understand the topic, because you already got those insights. You are more efficient at the same time.

[00:11:18] Joe Colantonio Yeah, exactly. Because I just looked at it and thought, I don't know. But once the AI gave me some talking points, it really did make the conversation better and quicker. I probably saved some money, because she spent fewer hours having to understand what the heck I was looking for.

[00:11:30] Tobias Müller It was the same for us, actually, with all of those financial documents that we went through. It's over 200 pages, and in the end, they are more or less similar, right? They always have a few similar points. Without any experience, it would have taken me days to go through those documents and even understand them, and finding hiccups in those documents takes even longer. This is what AI can do these days, because there is already a lot of data on that. But it doesn't replace the intellectual work; the intellectual work just gets more efficient based on those tools.

[00:12:03] Joe Colantonio Absolutely. I've been speaking with a few people, and they say that a lot of times AI systems bubble up insights, and then a person needs to look at them and determine whether or not it's a real issue. They think that's going to go away, and you're going to have to start trusting the AI to make the right decisions. Do you see that as the right model? Is that where you see things going?

[00:12:22] Tobias Müller I don't know. I'm the kind of person that doesn't even trust my coworkers, so I guess it depends a bit on character, right? How much are you willing to trust anyone in the first place? And the second one is, how much do you trust artificial intelligence? I mean, that's the point. Go back 12, 15 years, and everybody started claiming: yeah, we have test automation, there will be 100% automation, you just get the results out of all of those tests and you just need to trust them. And where are we right now? We still have flaky test cases, as sad as that sounds. We are still at the beginning of all of that. And now I already hear the story: yeah, we just need to trust the artificial intelligence to give us the right results. And we've already seen that that might actually be a problem. Again, I'm not on the side that says don't use artificial intelligence. It's more: use artificial intelligence in the way that it helps you. Get the results fast, but control the results. That's my current stance, and I think that's my stance for the next 10 years as well. Even if you think about the progress made in the last two years. I mean, if you remember, OpenAI was started like seven years ago.

[00:13:31] Joe Colantonio Yeah.

[00:13:32] Tobias Müller And where it went in those seven years is tremendous, if you compare GPT-1 to where we are right now. In seven years, that is tremendous. Nevertheless, I stand by my stance that in the next 10 years there won't be the major breakthrough where you can just trust the result of an artificial intelligence. Keep that in mind, because a lot of people believed Elon Musk when he said autonomous driving would be ready by 2012, and it was not. It was not ready in 2013. It was not ready in 2014. It's still not autonomous. And people need to understand: driving is an extremely simple problem. Think about it: a human needs on average about 20 hours of dedicated training to learn how to drive a car. And how long do you need to train on an SAP system? It's for sure more than two and a half days before you can actually use an SAP system, or a core banking application, or even Microsoft Word. How long does it take to master all of those features? And that's the thing. We are apparently not able to train an artificial intelligence, or multiple artificial intelligences in an agent-based model, to solve a problem that we humans need about 20 hours of training for. And we already claim that we should trust a system that can do much, much, much more than that simple task of keeping a car in a lane and making sure there's no crash.

[00:15:00] Joe Colantonio Right.

[00:15:01] Tobias Müller Which is difficult, but for a human, a rather simple task.

[00:15:06] Joe Colantonio So what do we do as testers and developers? We're being told all this, and managers are probably hearing it and saying, well, just use autonomous testing, you don't need to do anything else. I've asked a few people: is AI in testing overhyped, properly hyped, or underhyped? And how do we know which it is?

[00:15:24] Tobias Müller Yeah, right now it's overhyped. That's a given. I mean, right now it's overhyped and it's still in the hype cycle. I think it has not yet settled down, because we still don't know exactly what it actually is. What you hear is: test automation used to be only the automated execution; now we have Gen AI and we can automate the rest of the pipeline. You can generate test cases, you can classify results, you can even generate bug reports that are readable so developers can implement them. We can even extend that: bugs get fixed automatically with GitHub Copilot and stuff like that. Right now, AI is everywhere. And we already know that if something is everywhere, most of the time it doesn't work out. There are specific areas where it will help. Back to your question about the manager: that is why we call our platform the autonomous testing platform, because that is actually what managers want to hear. That is what marketing is for: to get the foot in the door and then show the solution. But what I think is really the future is more or less a system where you do exploratory testing, and while you do your exploratory testing, you are actually training the test system in the background, so that you create a model, an abstracted model of your application, where you can then use prompting in a more sophisticated way than today to set up all of your use cases. That is where I see the first step in autonomous testing. And the next step is that all of those use cases that are prompted all over the world are combined, more or less, into an autonomous system.
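As a rough sketch of that "learn a model while a human explores, then prompt against it" idea, here is a toy version in which a plain graph stands in for the abstract application model; the screen names, actions, and the networkx dependency are illustrative choices, not anyone's actual implementation:

```python
# Toy sketch: exploratory interactions grow a graph of screens and
# actions; a prompted goal is later resolved into concrete steps.
import networkx as nx

model = nx.DiGraph()

def record(src: str, action: str, dst: str) -> None:
    """Called in the background while a human explores the app."""
    model.add_edge(src, dst, action=action)

# An exploratory session populates the model:
record("home", "click 'Transfers'", "transfer_form")
record("transfer_form", "submit a valid transfer", "confirmation")

def steps_to(goal: str, start: str = "home") -> list[str]:
    """Resolve a prompted goal into the recorded actions along a path."""
    path = nx.shortest_path(model, start, goal)
    return [model.edges[a, b]["action"] for a, b in zip(path, path[1:])]

print(steps_to("confirmation"))
# ["click 'Transfers'", "submit a valid transfer"]
```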

[00:16:52] Joe Colantonio Nice. So I just had a random thought. We've heard about telemetry being implemented into systems so that when a bug occurs, you can trace exactly what's causing it. So if I had an automated test running, say, in production, and it finds a bug, then not only can it alert someone that there's a bug, but with the telemetry to know what caused it, they could go ahead and fix it as well. Do you see us getting there? Is that crazy thinking, or where do you see that going?

[00:17:17] Tobias Müller No, no, that's actually what I hear as well. One of our investors is actually in this open telemetry area, and he is a brilliant mind. The thing with that is, I think it's also overhyped right now, to be honest, because there are a lot of problems. You get a lot of data, and a lot of people have a lot of silos with a lot of tracing data in there, but they can't make sense out of it. We are still in the phase of making sense out of the telemetry data we receive and actually getting any benefit out of that data. But you are right: if you do have test cases that can run in production, and you can monitor them via the telemetry data, then you can easily identify that this test case actually triggered the bug, and the bug is caused by this problem. I think that is also part of the future, that is a given, and it is actually closer, because all of the telemetry infrastructure is already there. Microsoft has been gathering telemetry data, I think, for the last 35 years or so. We know how to do that. The majority doesn't yet know how to get the semantics out of all of that data, but we do know how to solve the problem. We also know how to run test cases in production, and we know how to isolate production, so you can take the production system and actually isolate a secondary production system. That is based on the new technologies that are available to us right now. And I think that is coming in the next 3 to 5 years: that we really use telemetry data to capture what the test case did, which bugs were actually produced by the test case, and where they need to be fixed in the code, but also to generate new test cases. Because there is telemetry data that is not based on test cases being executed but on real-life customers, where you actually know: hey, okay, we rolled out that release, and apparently 10% of our consumers triggered that bug. That is a chance to reproduce the bug based on the telemetry data we received, so we generate a test case out of that. And that's the point: do you then still need autonomous testing, in a sense? Because you have your customers, they are running through the product, and they are defining exactly what needs to be tested. I think that is a lot closer than fully autonomous testing; I think that is happening in the next 2 to 3 years, that it's really productive. It's already available today, you can do that, you can set it up on your own, and it works if you do everything right. But I think it's something you can use out of the box in the next 2 to 3 years.
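For a sense of how a test run gets tied back to telemetry, here is a minimal sketch using the real opentelemetry-api package; the span names, attribute keys, and test id are made up, and exporter/SDK setup is omitted:

```python
# Minimal sketch: tag the trace a test produces with the test-case id,
# so a failure seen in telemetry can be walked back to the test that
# triggered it. Without an SDK configured, these API calls are no-ops.
from opentelemetry import trace

tracer = trace.get_tracer("e2e-tests")

def run_transfer_test(case_id: str) -> None:
    with tracer.start_as_current_span("transfer-funds") as span:
        span.set_attribute("test.case_id", case_id)        # correlation key
        span.set_attribute("test.environment", "prod-isolated")
        # ... drive the application here; downstream spans in the same
        # trace inherit the context, linking bug -> trace -> test case.

run_transfer_test("TC-1042")
```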

[00:19:27] Joe Colantonio Yeah, I love that. That's going to be awesome.

[00:19:29] Tobias Müller Yeah, it actually will be. The thing is, nobody talks about that, right? Everybody is talking about artificial intelligence and autonomous testing and stuff like that. But that will be the real game changer: connecting the telemetry data that is already out there to what actually happened and to the test case, and then actually being able to fix the bug because you know exactly where it originated from. That is the major game changer that's coming.

[00:19:53] Joe Colantonio You heard it here: we're going to replace SREs and developers as well. Done. Right? No, no, I'm joking, of course.

[00:19:58] Tobias Müller That's the point, right? Everyone right now says AI will replace everyone. And by the way, that's the joke of history, right? When I was still in college, there were open positions at Microsoft that were called SDET, software development engineer in test. Back in Germany then, I always thought, okay, that is a software developer that is still being tested somehow, or still needs to prove they are able to do the stuff. Then I understood: no, no, no, that is a software developer who actually writes automated test cases. And then the term disappeared, right? Because test automation was democratized and everybody could do it; the business could test themselves and stuff like that. I think SDET is coming back as a role. People and companies are looking for software developers in test again. That is actually an interesting development, if you think about it: everything is getting more artificial intelligence, it's getting a lot easier, it's getting democratized, and nevertheless the old roles are coming back. That should give us a hint.

[00:20:52] Joe Colantonio Absolutely. So let's talk a little bit more. We've talked about autonomous testing and AI, and a lot of companies are coming out with tools. You work on TestResults.io, you're one of the co-founders. So how is your approach different? You're a pretty straight-up guy; every time I speak to you, you don't give me a vendor pitch. How is TestResults.io different? Because it was built differently, and I think in a unique way. So maybe for people who missed the last episode: what is TestResults.io, and how do you approach, say, autonomous testing?

[00:21:20] Tobias Müller Yeah. So TestResults.io is basically a completely different approach to identifying elements. Right now, what people often use, or always use, is to try to identify: with which input do I need to interact? Where do I need to put my text, where do I need to select a value? And they try to identify a locator for that. That is the wrong approach in every case, and that is why TestResults.io does it differently. Because there is already a locator: there is something that tells you, as a user, which field is the right fit. If you see a field labeled "Name," you know exactly that the next field is where my name should go. And it's not the nearest field, by the way; it's the logically next field, which depends on the culture you grew up in, stuff like that. That is actually what we put into TestResults.io, so it works completely differently from any other system in how it locates where it should interact. That's the first part. And then we do have the autonomous part, where we are only partially there; it's not fully autonomous testing. What we did is train the system to understand how scrolling works. So if it doesn't find an element on the page, or in a list, for example, it can identify that there is a way to scroll. There are a myriad of ways to scroll a list, for example, that you cannot automate by hand. We trained the system on what scrolling actually means, how it works, and what the specifics of scrolling are: you can be at the top already, you can be at the bottom already, or you might not be able to scroll at all. That is where we do have autonomous functionality in the system: you can just tell it, hey, I want to click that element, or I want to go to that page, and the system will find that element if it's available. And we also have a different approach to Gen AI. What you see in Gen AI is always the typical approach: hey, I have my requirements, or I have screenshots from my application, generate test cases for that. I did a test with somebody on LinkedIn and said, hey, I want to have test cases for WinWord. And you always get those generic test cases: the performance should be good, you should be able to log in, you should not be able to log in with wrong credentials, it shouldn't allow you to log in without providing a username. Typical test cases that everybody who is involved in testing can write down. There's no real benefit; you're generating thousands of test cases without much benefit. The difference is: you generate this model of your application, and then you prompt use cases. Take a used-car platform, that's my primary example. You tell the system: I want to buy a car. I want to buy an Aston Martin, for example, for $6,000. That is what you put in the prompt, and based on the model generated in the background, it will actually generate the interaction steps. So you don't need to understand how your application works; you just give it a use case, and the system is already smart enough in most cases to identify the sequence. That is how you build up your test scenarios. That's how you build your user journeys, because you don't talk about test cases anymore.
You really think: okay, what does the user want to achieve, and what are the steps to achieve that? Those steps are auto-generated, step by step, and aware of the context. So if you tell the system, hey, this is a system for a German parcel service, and put in the information: I want to send a package to Japan, it generates all of the steps. And then I just said, yeah, by the way, it weighs four kilograms, which is like how many pounds, 8 pounds? And it just added that at the right position in the chain of interactions. That is what we are doing differently. The claim is not: okay, give us the requirements, we generate all of the test cases for you, and test generation is democratized. It's really: no, we do the augmented stuff. Give us a use case and we generate the steps for you, because that's the nasty part, right? Nobody wants to drag and drop those elements and select the right inputs; that's exactly like the execution. The intellectual part is coming up with a test scenario and with the use cases that you can actually challenge the system with, not executing a test case or dragging and dropping those elements and keying in the data. And that's the difference with TestResults.io.
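To show the shape of "give it a use case, get interaction steps back," here is a hypothetical sketch; the Step structure and the hardcoded plan stand in for what an LLM grounded in a learned application model might return, and none of this is the actual TestResults.io API:

```python
# Hypothetical sketch: a prompted use case resolves into ordered,
# user-level interaction steps (hardcoded here to show the shape).
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # "click", "type", ...
    target: str   # described the way a user would see it
    value: str = ""

def plan(use_case: str) -> list[Step]:
    # In the real idea, an LLM plus the application model produce this.
    return [
        Step("click", "the 'Send a parcel' button"),
        Step("type", "the destination country field", "Japan"),
        Step("type", "the weight field", "4 kg"),   # added at the right spot
        Step("click", "the 'Calculate price' button"),
    ]

for step in plan("Send a 4 kg package from Germany to Japan"):
    print(step)
```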

[00:25:28] Joe Colantonio Also, what I've seen is that when you work in an enterprise company, you hop across different tech stacks. Sometimes I need a test for a web browser, I need a test for the backend, I need a test for the mainframe. Because you use an image-based approach, it seems like you're able to cover those types of journeys. Am I understanding that correctly?

[00:25:47] Tobias Müller Yeah, completely. And that's the point. I always have to laugh if I see so-called end-to-end testing based on Playwright or Selenium or tools like that, where end-to-end just means your web pages. In an enterprise, you typically have a huge tech stack. You have the web pages, you have rich clients, you have fat clients too, you have backend systems, you have database systems and stuff like that. If you have a visual-based approach, you can overcome all of those limitations, because the technology doesn't matter anymore. It's even better for web pages too, because HTML has canvas, for example: you can draw something on the HTML page, and you technically cannot test it. All of those tools have fallbacks where they compare an image, but if you compare pixel-perfect images, it just doesn't work; it's just not reliable. And that's the problem: even with a well-supported tech stack, if you say, I have a solution that can test web pages, there are still corners it cannot test. The next one is Google: think of Flutter, three or five years ago, I don't know. The first rendering for web was pixel-based; it actually used canvas to render the whole interface, and all of those typical testing tools were not able to test it. Now they have adapters and different renderers for Flutter; it has all settled down. But if you have a universal approach, and the universal approach is more or less abstracted and human-like, then those limitations will never be there, even in the future. I can already proclaim that any new user interface framework that you, or anybody else at the big companies, might come up with can be supported today.
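As a bare-bones illustration of why a visual approach is technology independent, here is a sketch using OpenCV template matching to find an element by appearance instead of by DOM locator; the file names are placeholders, and real visual testing engines are far more robust than raw pixel matching:

```python
# Bare-bones sketch: locate a UI element on a screenshot by appearance,
# so it works for canvas, Flutter, fat clients - anything that renders.
import cv2

screenshot = cv2.imread("screen.png")       # placeholder: full-screen capture
template = cv2.imread("login_button.png")   # placeholder: element image

result = cv2.matchTemplate(screenshot, template, cv2.TM_CCOEFF_NORMED)
_, confidence, _, top_left = cv2.minMaxLoc(result)

if confidence > 0.9:  # tolerance threshold, not a pixel-perfect compare
    h, w = template.shape[:2]
    center = (top_left[0] + w // 2, top_left[1] + h // 2)
    print(f"element at {center} - click via OS-level input")
else:
    print("not visible - try scrolling, then match again")
```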

[00:27:22] Joe Colantonio I love it. I have in my notes, for some reason, "User One, technology independent." What does that phrase mean?

[00:27:28] Tobias Müller That is something we are working on right now. This is the technical solution I mentioned, the one I see as the future: you give the system access to all of the different applications you want to test, and you just click through the applications while the system learns in the background how the application behaves in those different scenarios. Based on that, it generates an abstract model, and based on that model, you can afterwards prompt all of the interactions that you want to have automated. That supports you tremendously in your day-to-day work. And that is why we call it User One. In the telemetry area, they say the first dashboard you ever want is the one based on telemetry data. In the same way, the first user you ever want to access your system is, more or less, your test user. That is why we call the technology we are coming up with User One. I guess it will be publicly available in the next 6 to 9 months, and you can already register today to be a design customer for it. And it will exactly be... I don't like the term game changer anymore, because GPT uses it a lot in the text it generates.

[00:28:40] Joe Colantonio Yes. Yep. Game changer. In the realm of automation testing.

[00:28:44] Tobias Müller Exactly. I mean, everything is a game changer these days. This is just the next evolution, because it combines the exploratory testing of a human with intellect with the ability to autonomously define test cases, based on the model that you as a human generated and on the use cases you defined. With that, you set the boundaries that allow the system, in the state we are in right now, to understand: is the output that I get right, or is it wrong? That is the next evolution in testing for me. You don't do test automation in the typical way; you just test the application like an exploratory tester, and afterwards you can prompt your test cases pretty simply and get precise results. It's not about trusting artificial intelligence anymore. It's about you setting the boundaries, and you need to trust what you set. And that is, once again, a completely different approach to using all of the technologies that are available today.

[00:29:41] Joe Colantonio Yeah, it seems like the perfect combination: keeping the tester in the loop, using their expertise in the application and their assumptions, and having AI handle the rest.

[00:29:50] Tobias Müller Exactly. That's the point. I still remember: the best person who ever wrote a specification was one of the testers on my team, because testers actually understand the system, they understand the user, and they also think extremely precisely. They come up with precise requirements that are actually implementable and testable. You need somebody like that in the loop. In robotics, there's the term "human in the loop": there's a human that controls the robot, so the robot can be sure it doesn't hurt the human somehow. And I think that is also required for testing. There will be more and more robotic stuff, like automated execution and automated design, but there's always a human in the loop. And as I said before, I don't see fully autonomous testing for the next ten years.

[00:30:44] Joe Colantonio Wow. Very nice. So what's the new product called?

[00:30:48] Tobias Müller That will actually be TestResults.io User One.

[00:30:50] Joe Colantonio User One. Okay, cool.

[00:30:51] Tobias Müller Yeah, it is really User One, because that is the first user that you want to have access your system.

[00:30:57] Joe Colantonio Nice. Is there anything else you're working on? I also have something written down called TestResults.io Instant. I don't know where I got that from.

[00:31:04] Tobias Müller Yeah, that is just the technical term for how we actually bring User One to everybody. As you mentioned in the intro, TestResults.io is spreading around the world, and you notice that the current delivery process is difficult in different areas of the world, because things just work differently. Instant is the technology vehicle that we use to deliver the User One experience to everyone. So if you use a Mac today, even if you use the most bizarre, unique setup that you can ever come up with, you will be able to use User One.

[00:31:37] Joe Colantonio Very cool. Okay, Tobias, before we go, is there one piece of actionable advice you can give to someone to help them with their autonomous testing efforts? And what's the best way to find or contact you, or to get hands-on with, it sounds like, User One from TestResults.io?

[00:31:51] Tobias Müller For User One, you can easily register at TestResults.io to be a design customer. We are always looking for feedback. So if you want to get hands-on with stuff that is ahead of what you currently see with copilots and all the stuff that's coming out, like the best new thing, if you want to be ahead of that, just join the waiting list to become a design customer. We do have some expectations for design customers, and it really depends on the system: you will need to have experience in automated testing, and you need to be dedicated enough to actually give us your time, to really go into the system and challenge it.

[00:32:25] Thanks again for your automation awesomeness. Links to everything we covered in this episode are over at testguild.com/a496. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:33:01] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the FAM at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

SimpleQA, Playwright in DevOps, Testing too big? TGNS140

Posted on 11/04/2024

About This Episode: Are your tests too big? How can you use AI-powered ...

Mudit Singh TestGuild Automation Feature

AI as Your Testing Assistant with Mudit Singh

Posted on 11/03/2024

About This Episode: In this episode, we explore the future of automation, where ...

Eli Farhood TestGuild DevOps Toolchain

The Emerging Threats of AI with Eli Farhood

Posted on 10/30/2024

About this DevOps Toolchain Episode: Today, you're in for a treat with Eli ...