Digital Testing Equals Testing of Digitalized Processes with Tobias Müller

By Test Guild

About This Episode:

Today, I have a fascinating conversation lined up with Tobias Müller, the visionary founder and CTO of TestResults.io. In this episode, we'll explore the complexities of test environments and innovative solutions in test automation.

Tobias shares insights on handling complex enterprise cross-domain automation testing and practical techniques for streamlining test automation. He also explains why relying solely on code-based methods might be limiting.

We'll also discuss the significance of UI versus API testing, dynamic software elements, and the challenge of universal approaches in automation. From the limitations of traditional tools like Selenium and Appium to the promising potential of computer vision techniques and AI integration, this episode is a must-listen for anyone looking to push the boundaries of test automation.

Get ready for expert advice, industry anecdotes, and a thoughtful discussion on the future of digital testing. Listen up!

Exclusive Sponsor

In this interview Tobias also told me that he will be hosting a webinar on December 5, 2024, focused on reshaping test automation.

Titled “Digital Testing Applied,” the session will address the growing complexities in testing full digital business processes, from initial user authentication to API calls, OS-specific interactions, and even PDF and file operations.

This 30-minute webinar offers a structured approach for software testers and automation engineers aiming to manage these challenges across web, mobile, and desktop applications.

During the session, Tobias will share strategies to reduce test maintenance costs, detect bugs earlier, and create more efficient test processes. The presentation will emphasize customer-centric testing, showing participants how to implement approaches that prioritize end-user experience, reduce testing times from weeks to hours, and ultimately lead to more reliable software releases.

Highly recommend you register for this webinar NOW.

About Tobias Müller


A recognized thought leader in test automation and software quality, Tobias regularly shares his expertise at major industry events, including AutomationGuild (Top 10 Speaker) and as AI training lead at Malaysia's National Training Week.

With over 25 years in software development, including leading projects up to €160M, Tobias brings deep expertise in regulated environments and complex testing scenarios. As the founder of TestResults.io, he and his team revolutionized the testing landscape by introducing groundbreaking technological approaches to automated testing.

You might know Tobias from:
TACON Conference keynotes
Test Chat discussions with Brijesh Deb
Expert panels alongside Michael Bolton, Paul Grossman, and Larry Goddard
TestGuild Podcast with Joe Colantonio
Featured expert in the book “Test Automation Awesomeness”

Connect with Tobias Müller

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:00:35] Hey, do you want to discover a new way of thinking about automation? Well, you're in for a treat, because today we will be talking all about digital testing, which equals testing of digital processes, and a bunch more. I think this episode is going to blow your mind. If you don't know, we're joined by Tobias Müller, who is the CTO and co-founder of TestResults.io. He's a pioneer with over 25 years in software development and a decade specialized in software testing for high-stakes regulated environments, which is critical because the proof is in the pudding when it comes to automation, especially at the enterprise level. He really knows this stuff. Tobias has led projects of up to €160 million and has redefined the testing market with TestResults.io, which brings a fresh, technology-driven approach that really sets a new standard for automation. If you haven't checked it out, you definitely want to, using the link down below. Let's dive into his unique insights and vision for the future of testing and what digital testing is. You don't want to miss it. Check it out.

[00:01:33] Hey, before we dive into this interview, I want to share with you something that Tobias mentioned. He said he's going to be hosting a webinar that takes a deep dive into everything we talk about in this episode. The name of the webinar is Digital Testing Applied, and it's going to address the growing complexity in testing full digital business processes, from initial user authentication to API calls, OS-specific interactions, and even PDF and file operations. This 30-minute webinar offers a structured approach for software testers and automation engineers aiming to manage these challenges across web, mobile, and desktop applications. And during the webinar, Tobias mentioned he's going to share strategies to reduce test maintenance costs, detect bugs earlier, and create more efficient test processes. The presentation will emphasize customer-centric testing, showing participants how to implement approaches that prioritize end-user experience, reduce testing times from weeks to hours, and ultimately lead to more reliable software releases. I highly recommend registering for the webinar now using the link down below, and also take the chance to check out TestResults.io while you're there.

[00:02:40] Joe Colantonio Hey Tobias, welcome back to The Guild.

[00:02:45] Tobias Müller Hi, Joe. Thanks for having me back.

[00:02:47] Joe Colantonio Yeah, it's been a while. You're always working on things. You always kind of rethink different things for automation. I was really intrigued by this concept of digital testing. It's one of the first times I've heard of it. I guess, before we get into it, like what is digital testing?

[00:03:01] Tobias Müller Yeah, digital testing is, I think, a new term. You haven't heard it a lot beforehand. It's about testing software in context, like in processes. I mean, there's a lot of digitalization still ongoing in enterprises, so they are still discovering their processes and trying to get them into software in a way. And a lot of people still test software these days in silos, so they test individual software elements and individual software, but they never test a full process, like the value stream, which is what you typically call it in enterprises and in banks. And that is actually a problem. And we said, okay, if we do automation in testing, what would be the challenge if we switch that over to digital testing? How could you test multiple different software pieces that are along the process chain, and how would we do that from a technical point of view?

[00:03:49] Joe Colantonio Nice. How is this different from what people already conceive of as automation?

[00:03:53] Tobias Müller The thing is, if you look at automation today, I mean, I just saw a posting about Selenium being 20 years old, right? And that is more or less the thing to automate if you have web pages. But that is exactly also the limitation: you have a technology and you automate the web page. We invested a lot in learning that automation, and then you can test that page. Lovely. But we all know that there are still desktop applications, native applications out there. Even if they are getting replaced by web applications, they are still out there, and there are databases, and there are mobile applications. I mean, think about just a simple login. Most of them are two-factor authentications anyway. So you log in on the web, but then you get your second code on a different device. If it's your mobile phone, you would need to switch to Appium. It's always a second tool in the chain. Then you have some desktop applications as well, so you need to come up with another tool, or maybe you can use Appium for that. And I think that's part of the problem. Automation today is like: you have an application, developers are creating an application and they are doing unit tests, they are already doing automated testing, and there's a lot of thought going into how we can do end-to-end tests, but in the sense of, okay, how can we test from the front end to the back end and then down to the database? But that is only affecting this one single application. And nobody is using a single application these days. Just nobody. I mean, even with WinWord, for example: 20 years ago you used it and you just printed out a sheet of paper afterwards. Today, you actually create a PDF file, so there's always a second piece involved, and then you send it, I guess, via email, so that's Outlook or Gmail or whatever involved.
There are already three different kinds of software involved in a little process that is writing a document and sending it to a friend.

[00:05:35] Joe Colantonio Yeah, absolutely. And like you said, Selenium was made just for browser automation, which is cool. But I'm always confused when people say that's all they use, because I'm used to working at enterprise companies like insurance and health care, and they have all these applications, more than just a web browser. And like you said, they jump context a lot. What industries do you think most see this type of challenge? Or do you think a lot of people just try to cobble something together and give up on really automating the full end-to-end workflow?

[00:06:03] Tobias Müller What you actually see is that they give up. I mean, they try to automate everything, but then they fail, and then they go back to exploratory testing, sometimes called manual testing. And they openly state it: we did automation, we tried two times, three times already, it never worked out, all the business users have to test again. That's the only way that works, because we lost a lot of money, and that is not how we are going to proceed anymore. So the business users just spend a few days on testing, and we are back to happy-path testing: this is the functionality that I urgently use, does that still work? I think that's just not where testing should go, and that is not where testing should be. And I fully get the difference between testing and checking and automation and all that stuff. But nevertheless, there will be more and more software out there, and it just needs more manpower. The manpower doesn't exist, so we need to automate. There's no way around automation, and yet we box ourselves into silos, like just being able to test web pages. And I think that is the biggest pain for all of them, because management is like, okay, automation is possible. I mean, look at robotic process automation; it blew everybody out of the water. So everybody is aware that automation can work. In a way, testing is a completely different beast, I agree with that. But nevertheless, people expect that automation just works, and that it works across software borders, that you can just automate what you see in front of you. Like, why can't I automate what I'm using today? I'm just back from a prospect, actually, and they use a core banking application, and then they have the trading application, and the trade is going to the core banking application. You know what I mean? It's 2024, and they test manually in one of the biggest banks over in Switzerland.

[00:07:36] Joe Colantonio Is it an education thing or a convincing thing? Because I see a lot of things on LinkedIn like there's no such thing as automation, and it's not testing. I don't know, I never followed it. Maybe because I'm an old dude, and I hate when people change words and meanings.

[00:07:49] Tobias Müller Yeah, exactly.

[00:07:50] Joe Colantonio Is there confusion, then? And what is automation?

[00:07:53] Tobias Müller I don't think there's confusion. I think it's a philosophy thing. It's like when we had big projects: there are six people in the room and there is a specific question, and you get like 20 different opinions, but none of them gives an answer, because they were not educated on that. I don't mind if it's automation or if it's checking or if it's testing or whatever it is. The problem is that people out there are not able to automate their workflows. That is the pure problem. And they have different software, so they have different frameworks. They try it on their own with Selenium and Appium because they're open source and naturally they are free and you don't have to invest anything. We all know that that's not the case, but that is what people actually perceive. And then there are software vendors that add on top of that and have their studio solutions or whatever to make it even easier. And they still have the same problems. And then there's stuff on top of Selenium that auto-heals itself, which frankly doesn't fulfill the promises all the time. And that is what you actually have to ask yourself: is that really where we are in technology, in testing, in 2024? And my answer to that, actually since 2018 already, is: it can't be. That cannot be state of the art in testing. It just cannot be. I mean, for developers, we have millions of different tools that actually make life easier, and in testing, it seems ridiculous. You need more time for automation than you would need to just run the test cases manually or just do exploratory testing. And that cannot be the state of the art. And that is just what I'm fighting against, to be honest.

[00:09:17] Joe Colantonio No, it's a good fight for sure. AI's been around for a bit, but since 2022 it really took off. Has AI accelerated this need for more automation? Because more code is being written by AI, and therefore you're going to need to test more. Even to keep up, you're going to need to start automating a lot more of these processes.

[00:09:35] Tobias Müller I saw that statement from Jason as well, where he says: hey, we need more testing because there's more AI. AI will generate more software, so we need more testing because there will be more software. But in reality, that's not the case; AI is not writing software. I mean, AI is extending the line that I've just written in the code, or is adding some fragments that are easy to add. But it's not generating software in that sense. I know that's what everybody is telling everyone else, but it's actually not true. If you're using all of those tools, what you get is a little help with being faster, so you get more efficient at writing software. But writing software is still for humans. That might change in the future, but right now it's for humans only. So there is more software, true, because everything is software these days. I mean, every company more or less is a software company: if your core processes rely on software, then you are a software company, because you need to be able to manage that software. It's pretty simple. And that means there needs to be more testing, but there needs to be more reliable testing, because what we see is that automation is not reliable. And I think that is the fundamental problem. People trust in testing because they understand that it is required. Not everybody, but most of them understand it's required, and then they see the burden coming at them and they need to find a solution to get all of that work done. And that is automation. And then they start with automation, and then they have a setback, because it breaks every two weeks. You automate, and then it doesn't work anymore. And I know not everybody will agree with me, and they all have successful automation projects, but I'm going to those sites, I'm going to those customers, and I see what they have.
And I see, like at a recent prospect, that they had a whole team just for keeping up with the updates on the different testing systems: keeping up with the newest version of Selenium and migrating everything, keeping up with Appium, keeping up with Tosca, all of the different layers. They have their own team just to keep that stuff up. There's a lot of churn in automation, and that is what we are changing right now, because we say: okay, if a user can use an application, then we should be able to automate it, automate it easily, and get rid of all of this locator stuff, where you need to go into code and be able to identify how to get to that element. Because you, Joe, and I, if we see a field that is labeled "email address," we enter our email address. It might be a fake one, that's a given, but we will enter our email address. And why can we not have test automation that does exactly that? There is an email address label; just find the right field where you would enter the text, like you, as Joe, would identify the right field. And it's pretty simple, actually, if you understand the basics of human-computer interaction. I don't get why in test automation that is still not the case. People think it's still the 80s and we don't have processing power, but we do have enough processing power for all of those different AI models. If you think about large language models, they consume tremendous CPU power. We can use small computer vision models that are already good enough for testing, put them into place where we need them in automation, rely on them, and have far more stable automation than you have today.

[00:12:42] Joe Colantonio I agree with the visual part. I just know some people maybe tried something like Sikuli back in the day and they're like, man, not really reliable, or some sort of OCR type of thing. Maybe this is a good point to talk about the technologies and methodologies that you, as the founder of TestResults.io, use to help test these digitalized processes more effectively or efficiently.

[00:13:04] Tobias Müller I mean, in the end you just have pixels, right? Like everybody else. And Sikuli failed in multiple ways. It just did an image search and tried to match that, and over-promised in the end, more or less. What we do is completely different. We still use pixels; in that sense, we still have images. But the idea is not that we try to find an element, or identify an element, based on what you tell us. It's more that we understand how humans work and how they actually behave when interacting with software. And one simple example: if you have a label that says "first name," and you have a text box to the right of it and another one below it, you will always enter it in the one on the right. So you will always choose the element on the right. Why? Because you learned to. In the culture you grew up in, you learned to read from the top left to the bottom right. It's pretty simple. That is your priority in how to fill in stuff. You learned it; you're not aware of it, but it's how you work. And it's pretty simple to model stuff like that these days in actually quite small models, and then you can just interact with applications in a natural way. And there's a lot of different stuff like this. Think about how many problems tables still cause for test automation. And it's extremely simple: as a human, you understand, even without any kind of frame around it, you see by proximity that there are different columns and those are rows, and you can easily handle tables without any training at all. And automation should act the same. We do have enough processing, we do have enough computing power. It's not simple, but it's doable from a technical point of view, and you just need to do it, that's all. We just need to step up a bit from what we know, from having to have those locators.
So I need to find an XPath to identify the text field that is there for the first name, and it's in an iframe or whatever. Okay, congratulations, now you found the way to get to the text field. The next application you test is a mobile application, and it doesn't work at all. Find a universal approach. And a universal approach is actually how your end user would automate it.
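To make the reading-order idea concrete, here is a minimal sketch of label-proximity element matching as Tobias describes it: given a label's bounding box and candidate input fields detected on screen, prefer the field to the right of the label, then the one below it. The `Box` type, scoring weights, and function names are illustrative assumptions, not TestResults.io's actual API.

```python
# Hypothetical sketch: pick the input field a human would associate with a
# label, using left-to-right, top-to-bottom reading order as the priority.
from dataclasses import dataclass

@dataclass
class Box:
    x: int  # left edge in pixels
    y: int  # top edge in pixels
    w: int  # width
    h: int  # height

def score(label: Box, field: Box) -> float:
    """Lower is better: a field to the right of the label beats one below it."""
    dx = field.x - (label.x + label.w)   # horizontal gap to the right of the label
    dy = field.y - (label.y + label.h)   # vertical gap below the label
    if dx >= 0 and abs(field.y - label.y) < label.h:   # roughly same row, to the right
        return dx                                       # first choice: right neighbor
    if dy >= 0 and abs(field.x - label.x) < label.w:   # roughly same column, below
        return 1000 + dy                                # second choice: below
    return 1_000_000                                    # anything else: last resort

def field_for_label(label: Box, candidates: list[Box]) -> Box:
    """Choose the candidate a reader would most naturally pair with the label."""
    return min(candidates, key=lambda f: score(label, f))
```

With a "First name" label at the top left, a field directly to its right wins over one below it, matching the behavior described above; drop the right-hand field and the one below becomes the pick.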

[00:15:00] Joe Colantonio Yeah, great point. Especially when people need to test these packaged applications like Salesforce or SAP, where they generate random IDs or someone uses a shadow DOM, and they're spending all their time developing a way to get at these fields to make it more reliable. This sounds like a way that maybe gets around that. Am I hearing that correctly?

[00:15:19] Tobias Müller Yeah, completely, completely. Specifically Salesforce. I mean, Salesforce is an application where a lot of people claim: we do have frameworks for Salesforce. But the question is why? Why do you even need a framework for Salesforce? It's so natural for humans to use it. You just enter the stuff and that's it. Why do you need a dedicated framework? The same for SAP, the same for other technologies like that. But people always said they can be automated easily if you have a dedicated connector: we need to connect to those systems and understand how the UI is created and stuff like that. And you are right, Joe. The thing is, in UI, specifically on the web, you have a lot of different frameworks, and they are generating those IDs on the fly because they are used for styling and different things, but they are not used for testing. True, you can add testing IDs, but most of the time that's not done. And even when it is done, sometimes it gets forgotten, and then you have the next problem. It's always a catch-up game. You always try to find a solution to a problem that you initially created yourself. We said: okay, test automation needs to rely on the code. Okay, fine. And now we created a problem, because we cannot navigate in that code. Why not get rid of the approach of relying on the code? Because that makes us universal, so we can access any application. If we find an approach that doesn't rely on the code, we find an approach that works for any application. And that brings us to the point where we are in digital testing: you can actually test the whole value stream. You start in one application, you continue naturally in the next application, you switch over to some kind of web application.
You switch back to a mobile application that sends you some notification where you need to check the stock or whatever, and then you continue in the next application, just in a natural way, like you're actually using it. I mean, if you go to a trading floor and look at those 8 or 16 screens they have in front of them, it's a lot of different applications. Or a railway control system: they have 8 different applications running at the same time, and they just work in cooperation. You need to test all of them together.

[00:17:17] Joe Colantonio All right. Now, we've touched on it a few times, but how does TestResults.io's approach to testing go through these interconnected workflows? Because it's image based, it's able to run on Linux, on Mac. How does it help, though? I know it's vision based. Is that why it's able to interact with all these other systems, because it doesn't matter which OS it's running on or any of that type of information?

[00:17:39] Tobias Müller Yeah, that's the point. The thing is, even if you're using vision or visuals, it still matters which operating system you run on. You just need to look at browsers, right? Firefox, for example, renders text with grayscale anti-aliasing around it, and Edge and Chrome render it with rainbow colors around it, and stuff like that. But you can abstract over that so you don't notice it. There are simple tricks in perception for how you can make things easier for a computer vision model to perceive, like subsampling and supersampling. There are a lot of different techniques out there from game development, and you can use all of those modern techniques in test automation. That is actually what we apply. So I won't tell you exactly how it works.

[00:18:20] Joe Colantonio Right.

[00:18:22] Tobias Müller I can just tell you it's a combination of different algorithms and different computer vision models, and in the end it's pretty simple. Most of the time, you will interact with something that has some text in its proximity, so you need some kind of computer vision model that is able to detect text, to make text distinguishable so that you can see: okay, there's text here, just text, and there's text there as well. Afterwards, you apply a different computer vision model to identify what kind of elements you can detect close to that text. And then you apply some kind of behavior model based on probability: for somebody from that culture, what would be the most probable element to choose? All of that combined, you could actually call it a multi... how is it called? A multimodal AI agent, stuff like that. That's just new terminology for it. In the end, it's just different computer vision models that are specifically trained on those tasks.

[00:19:19] Joe Colantonio It's interesting. I know Claude just came out with multimodal automation. Is that basically trying to use visuals? Is that what it is? It's called an AI agent, but all it's doing is using a visual type of algorithm to get through the workflow?

[00:19:32] Tobias Müller I don't know what they do, to be honest. I just saw that they have a success rate of 23%, and that is why I said I'm not interested right now. And if you look at their demos, it's pretty interesting, because they are happy if they can open an Excel file, read the text in that Excel file, and then put that into some form. It's pretty nice, but if you look at those demos closely, they don't need to scroll, they don't need to do any selection, nothing that is a complicated interaction, right? And the next part is, they don't position it as a testing tool; they position it as an alternative to robotic process automation. And as all of the big RPA players learned, robotic process automation is part of testing, but testing is a lot more than that.

[00:20:11] Joe Colantonio All right, you just opened up a memory for me. At one time I needed to test, for some reason, an Excel file going to a browser or doing something. I forget what it was, but it was almost impossible. It was a weird Office API type thing that never worked. This type of technology, once again, is not just for browsers. It can go to PDFs, because it's visual. It can even go to a Microsoft Excel file or Microsoft Word. Am I hearing that correctly? It's able to interact with the text and things like that and also validate it?

[00:20:40] Tobias Müller That's what we typically do, because that's what's typically in the workflow, right? You do something. For example, you mentioned Salesforce: entering leads is pretty interesting, but in the end you have some kind of dashboard that shows you how many leads you converted successfully. Numbers like that are all represented graphically so that you as a human user can easily understand them, and you need to test them as well. And most of the time you have reports and stuff like that, too. And yeah, naturally, if you go visually, then you can test all of that, and that's what you do. Typically, if I give a demo, I show the prospect exactly that: doing a login on the web page, doing the two-factor authentication via a confirmation code on the mobile phone, then going back to a desktop application, and then opening some kind of PDF invoice and showing how to check the invoice.

[00:21:28] Joe Colantonio All right. So once again, I just want to make sure people aren't getting confused here. When they hear "visual," they may think of other software or other vendors that use visual validation but aren't really able to interact with elements. If I had a web page with a chart and I needed to validate the chart but also interact with it, this is different, because this sounds like not only will it validate the chart, but you can also interact with the chart, because it's using visuals. Am I hearing that correctly as well?

[00:21:53] Tobias Müller Yeah, to be specific on that, it is not visual testing. Just to mention that one. And there is a big player, we all know them. They are doing visual testing and it's exactly the same problem, right? You compare screenshots in the end, but you need to get some way to get to the corresponding screenshot first. And I guess that vendor is doing it meanwhile as well. I mean they do have their compute platform and stuff like that, so I think they got the limitation and fixing that, but that's it. We are not doing visual testing. It is not about comparing screenshots in some kind of clever way. It is really like having visual cues, having visual aids, then use those visual hints to actually navigate applications and do verifications within these applications. And the verifications can be that we are checking for text in there. We are checking for other visual hints in here and stuff like that. So we can change all of them. And the interesting part is because you mentioned that OCR and that is also a topic that is always coming up if you're doing OCR. I mean, OCR just doesn't work, right? It just doesn't work. You can see that all the time. You can even see that on your iPhone. I mean, if you go to an Apple store today with one of their gift cards and I did that to buy a new iPhone, actually, they need to scan the text on the gift card. Apparently, they don't have QR codes on those gift cards for whatever reason. And if you start scanning that, actually, you will notice that sometimes just kind of just doesn't get the correct code. If you don't get the whitespace around the code, correct. So about the text that represents the code and you're scanning a bit more, actually, the scanner is completely confused. So that just a state of the art, OCR. It's just not meant for low resolution, colorful text. It just doesn't work like that. And that is also something where they just need to come up and see like you as a human, how would you do that? 
So if you read some text, it might be that you read it wrong, right? It might be that something in the background makes you confuse one character with another, like a lowercase L with a capital I. But nevertheless, you get it from the context: it starts with a V-I, there's an A at the end, and a capital I followed by another capital I doesn't make much sense, so you correct it. And that's just a dictionary, typically. But coming from the regulated MedTech environment, you can't use dictionaries, because there's a lot of domain-specific text, like "type and screen" for blood types. You can't have a dictionary for all of those different on-screen texts, which can change names as well. You need to come up with something else. And that is where we came up with reverse OCR, where you can actually say: okay, we get the following text from OCR, but we were looking for this text. What is the probability that the text we got from the OCR, which is wrong, is still the right text that we expect on the screen? Because that is what you do as a human as well. You abstract from what you're seeing on the screen: you see pixels, from those you create contours, and based on those contours you match against what you learned in the past. That is how you actually work, and that is what we resemble with digital testing.
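The reverse-OCR idea described here can be sketched as follows: instead of trusting the raw OCR output, score how likely it is that the noisy result still matches the text you expected to see on screen. The scoring function, the confusion table, and the 0.8 threshold below are illustrative assumptions, not TestResults.io's actual implementation.

```python
# Score noisy OCR output against the text we EXPECT on screen.
from difflib import SequenceMatcher

# Character pairs that low-resolution OCR commonly confuses (illustrative).
CONFUSABLE = {("l", "I"), ("I", "l"), ("i", "l"), ("l", "i"),
              ("0", "O"), ("O", "0"), ("5", "S"), ("S", "5")}

def normalize(text: str) -> str:
    # Collapse whitespace so stray spaces around the text don't hurt the match.
    return " ".join(text.split())

def match_probability(expected: str, ocr_result: str) -> float:
    expected, ocr_result = normalize(expected), normalize(ocr_result)
    score = SequenceMatcher(None, expected, ocr_result).ratio()
    # Soften the penalty once if a substitution is a known OCR confusion.
    if len(expected) == len(ocr_result):
        for e, o in zip(expected, ocr_result):
            if e != o and (o, e) in CONFUSABLE:
                score += (1.0 - score) * 0.5
                break
    return min(score, 1.0)

def is_expected_text(expected: str, ocr_result: str, threshold: float = 0.8) -> bool:
    return match_probability(expected, ocr_result) >= threshold

print(is_expected_text("Patient", "Patlent"))  # True: l/i confusion, still a match
print(is_expected_text("Patient", "XXXXXXX"))  # False: unrelated text
```

The point is the direction of the check: "given what I expected, how plausible is what OCR returned?", rather than taking the OCR string at face value.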

[00:24:44] Joe Colantonio All right. So you probably get the same resistance that AI products do as well: how do I know it's really doing what it's doing? It's using visual methods to interact with the application, and it's able to get past a lot of these things, like a misspelling or something. How do you know, then, that it's doing what you thought it's doing? I assume some sort of reporting is created that shows what it did and how it got around it?

[00:25:07] Tobias Müller Yeah, there are two parts to that. One is that we check every single interaction with the application; every single step is actually verified. And you can imagine: if I need to enter my email address somewhere and there's a confirmation code somewhere else, and I put in the wrong confirmation code, I will not be able to log in. If I expect to be able to log in, it has already failed there, if I do the verification all the time. And the second one is induced failures. Most people don't do that, and I've only heard Bas talking about it, to be honest. The idea is: I have a test case, and I have a runner executing the test, and the runner can actually induce failures through the test case. I keep everything I assert or expect, the verification points, exactly the same. But if I have an interaction, I just change a character. The expectation is that the verification afterwards fails. And that's exactly what we do to prove that the tests actually work: we let them fail. And if a test doesn't fail then, your test case is just not a valid test case, and it stays open.
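The induced-failure idea can be sketched in a few lines: keep the verification points identical, corrupt one character of an interaction's input, and demand that the verification now fails. The login example and all function names below are illustrative assumptions.

```python
# Self-validating test cases via induced failures.
def mutate(value: str) -> str:
    # Flip the first character to something else to induce a failure.
    replacement = "X" if value[0] != "X" else "Y"
    return replacement + value[1:]

def run_test(interaction_input: str, app) -> bool:
    """Drive the app with the input and return the verification result."""
    return app(interaction_input)

def validate_test_case(correct_input: str, app) -> bool:
    """Trust a test case only if it passes normally AND fails when its
    interaction input is corrupted (verification points unchanged)."""
    passes_normally = run_test(correct_input, app)
    fails_when_induced = not run_test(mutate(correct_input), app)
    return passes_normally and fails_when_induced

# Example app: a login that only succeeds with the right confirmation code.
login = lambda code: code == "123456"
always_green = lambda code: True  # a bogus "test target" that can never fail

print(validate_test_case("123456", login))         # True: a valid test case
print(validate_test_case("123456", always_green))  # False: verification never fires
```

This is essentially mutation testing applied to the test's inputs instead of the production code: a verification that cannot fail proves nothing.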

[00:26:06] Joe Colantonio Absolutely. I guess I'm just thinking of objections here. A lot of people have already invested in testing infrastructure like Selenium grids or whatever, and they hear: this is completely different, I can't use it against that, I have to buy into not only this tool but a whole other way to run the infrastructure, to scale up. How does this work? How does it scale up? Would you need a different setup, or how does that work?

[00:26:28] Tobias Müller Yeah, I hear that every single time, because typically people do have some automation. They are not happy with it, they are spending a lot of money on it, and most of the time they are not using it, but they have it. You always get that objection. And what we say is: okay, you have that, but for sure you have gaps in your automation, right? You have PDF reports, or you have some kind of dashboards, or you have some native applications that you cannot automate. Start with that first. Try it out and see if it works for you. If it doesn't work, okay, it's the wrong tool for you, that's fine. But if it works out, you can expand step by step. And how does it scale? You don't need a Selenium grid for that anymore. There are two different aspects to scaling. One is that one of the problems in test automation is also test environments, right? You need an environment that you can test against, and most of the time that's not a single PC but a network of different server systems and stuff like that. You try to resemble that, so you bring up a virtual environment, including multiple different systems, to actually resemble reality, and then work against that. And then, how does it scale? Cloud computing is cheap these days. And I guess it's a misuse of cloud computing, how we use it, because we use it for testing. We spin up 100 VMs for one environment, we test live for 5 to 10 minutes, and then we bring it all down again. And if you're familiar with Microsoft Azure, for example, and that's just one example, this shouldn't be an advertisement for them, they have burstable VMs. They are pretty cheap, and they are boosted for the first 15 to 20 minutes: you get double the amount of CPU power, and you pay, I guess, a quarter of the price of the regular VMs.
So you can use those for tests, because you only need them for 15 to 20 minutes per test, and then you kill them again, you just throw them away. And that's actually how we scale. Because for Selenium-based testing, or code-based testing more or less, you can use headless systems, you can just start them in a Docker container, have them in a registry somewhere, and scale them to a million. For visual testing, you actually need some kind of visual representation that you can test against.
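The throwaway-environment pattern Tobias describes, spin up, test for minutes, always tear down, can be sketched as a context manager. The provision and teardown functions below are stubs standing in for cloud API calls (e.g. creating and deleting VMs); the names are assumptions for illustration.

```python
# Ephemeral test environments: provision, use briefly, always tear down.
from contextlib import contextmanager

ACTIVE_VMS: list[str] = []

def provision_vm(name: str) -> str:
    ACTIVE_VMS.append(name)  # stand-in for a cloud "create VM" call
    return name

def delete_vm(name: str) -> None:
    ACTIVE_VMS.remove(name)  # stand-in for a cloud "delete VM" call

@contextmanager
def ephemeral_environment(vm_names):
    vms = [provision_vm(n) for n in vm_names]
    try:
        yield vms
    finally:
        # Tear everything down no matter how the tests ended.
        for vm in vms:
            delete_vm(vm)

with ephemeral_environment([f"vm-{i}" for i in range(3)]) as env:
    assert len(ACTIVE_VMS) == 3  # environment is up while tests run
print(ACTIVE_VMS)  # [] -> everything thrown away afterwards
```

The `finally` block is the key design choice: a failed test run must never leave a hundred VMs running and billing.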

[00:28:22] Joe Colantonio All right. So once again, I'm just trying to think of things that people might have experience with. Let's go with LoadRunner, which I used to use for client applications. If the resolution wasn't exactly the same, it was going to fail. For people who have experience with that, how does TestResults handle it differently? Can you run it on different resolutions, or do all the machines have to be exactly the same resolution in order for the tests to be reliable?

[00:28:47] Tobias Müller So that's a typical scenario. You do have different resolutions, right? And that is why it failed in the past. And I think that's exactly the answer: that is the past. Like 15 years ago, we didn't have enough CPU power to do that. I mean, we did some edge detection and tried to get by with it, but it didn't work out well. These days we do feature extraction; we have a lot of technology that just works. I mean, look at Tesla. It's not a full self-driving car right now, I guess we can all agree on that, but the perception of the environment that it has is brilliant, and it can do that in real time. It couldn't have done that 20 years ago. And that's exactly what we see in computer vision as well. It's tremendously different technology now than we had 20 years ago, and that is what we are applying to testing, more or less. You have so much better hardware just standing on your desk, and nobody's using it. We are still parsing code and trying to get access to some HTML. It's not required anymore. I understand that 20 years ago that was the requirement, because we didn't have enough CPU power. As simple as that. And that's gone. And if you compare today's computer vision models to matching pixels like 25 years ago, you can do that, but the experience is different. So just give it a try. And if it doesn't work, it's not for you, that's okay. But just open up your mind. And the thing is, one of the objections is that a lot of people don't understand the approach. They are so used to finding a locator that gives them access to an element that they don't understand an approach where you don't need that. You tell them: hey, just work naturally. Just select the text that you would actually use as identification for the entry field, and that's it. Work like you would if you were just an operator of the application.
They're extremely trained on: I need to get a locator to that element somehow.

[00:30:37] Joe Colantonio Right. And that's like the only way it's going to work. So you almost have to get over that barrier. Because it's funny, you said over 20 years ago, and that's what I was dealing with almost 25 years ago. That's scary. You need to learn, you need to evolve, you need to try new things, I guess, as a tester.

[00:30:51] Tobias Müller And that's fine. Automation should change, but somehow it doesn't change. It's always small steps. And now we have Gen AI and we can generate a million test cases, but they are all just scratching the surface. Nobody cares. It's a lot of marketing: that is our biggest invention in testing, really? We can generate a million test cases with Gen AI that are all just scratching the surface of the application? I mean, it can't be like that.

[00:31:12] Joe Colantonio Absolutely. How about if someone takes the extreme and says: hey, look, we don't do UI testing anymore, it's unreliable, we just do API testing. Is that something you ever hear, and what would you say?

[00:31:22] Tobias Müller Richard Bradshaw actually had a good talk, in 2019 I guess, where he said: everybody is talking about testing pyramids, testing ice cream cones, and stuff like that, and they are taking it as a strategy, meanwhile, that they want to do less end-to-end testing, less UI testing, and more API testing, component testing, and stuff like that. And he had a good point in that talk back then, and that's already 5 years ago, where he said: hey, just adapt to the technology that you're using. If you have a pure API, visual testing doesn't make much sense, right? Because you have an API that you can test. But if you do have some kind of user interface, is it enough to test the API? No. Is it enough to test the UI? I would claim it can be, but in reality, what you want to do is a combination of both. You want to combine these UI tests together with API tests in the same testing scenario. What I hate, and what I see a lot out there, is people using Postman for API tests. They are sending some requests and checking them: did I get a 200? Okay, everything is brilliant, that is amazing. But that's not how an API is used. An API is used in a flow: you get some data from the REST API, you go to the next call, you provide that data to the call, and so on. And that's the important part. The whole software is always in a flow, and somehow testing seems to be static, and it cannot handle dynamics. And that is exactly what you see. I mean, if the DOM is modified dynamically, and I saw a web page recently that is generating the whole DOM dynamically all the time, it's pretty difficult to get a locator. And that's the case for all of this stuff. If you're sending API requests in Postman, you are doing a good job, that is actually brilliant. But is that how the API will be used by the software itself? Pretty sure it's not.
You have an authentication flow in the beginning, then you have your bearer token, in the easiest case, and then you continue with all of the different data that flows through the API to actually get the result that you want. And that is how we should test. And I hope that answers the question. Visual testing, in the sense that we do it, is functional testing, not visual testing in the sense of screenshot comparison, that's a given. But testing applications via the UI is not the Holy Grail either; it's the bare minimum that you should do, because your users will only have the UI. And having the testing pyramid and telling everybody: hey, you shouldn't do end-to-end testing, you shouldn't do UI tests, or you should minimize them to the bare minimum, yeah, that is okay if you cannot have stable UI tests. I mean, that's the root of the testing pyramid: on one end it's money, and on the other end it's stability. It's not a strategy to do it like this; it's the result of the surroundings, of what was given back then. And that is changing. I mean, you're here, and not only me, all of the vendors and all of the testing community are here to change that. But I see a lot of people just sitting there in their chairs: yeah, I always used Selenium and I still use Selenium now. It's the fourth generation of Selenium, 20 years later, and the game changer: they have relative locators now. For God's sake! A relative locator makes it even more flaky. I mean, if you have responsive web pages, and I say this needs to be to the right of the following element, it won't be to the right all the time. It might be below, it might be above, for whatever reasons, I don't know. The thing is, we see a lot of changes in there that people call game changers, which doesn't make any sense. It might be in some corner cases, I give that to everyone.
But as a testing or automation community, I would expect that we look at new technologies. I mean, what do game developers do? There is a lot of power in your GPU. What do they apply to make things engaging, and how can we use that to analyze screens? That is where we need to go. And yeah, a lot of stuff should be unit tested. Honestly, I've never seen a project where nobody introduced a single bug because of unit tests. But I understand that unit tests are a basic requirement for modern software development, I get that, and so we do a lot of unit testing as part of the automation, right? But I personally think that UI tests should be valued far more than they are today. And I know the reasons: they are brittle, they are flaky, they don't work, they are hard to maintain, and all this stuff. But think about it: if they were stable and if they weren't hard to maintain, wouldn't they be the first choice, because that is what the users would use?
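Testing an API "in the flow", as described above, means authenticating, carrying the bearer token forward, and feeding each response into the next call, rather than firing isolated requests and checking for a 200. The in-memory FakeApi below stands in for a real HTTP service; the endpoint names, credentials, and token value are all illustrative assumptions.

```python
# Flow-style API test: chained calls sharing auth state and response data.
class FakeApi:
    TOKEN = "bearer-abc123"

    def login(self, user: str, password: str) -> dict:
        if (user, password) == ("alice", "s3cret"):
            return {"status": 200, "token": self.TOKEN}
        return {"status": 401}

    def create_order(self, token: str, item: str) -> dict:
        if token != self.TOKEN:
            return {"status": 401}
        return {"status": 201, "order_id": 42, "item": item}

    def get_order(self, token: str, order_id: int) -> dict:
        if token != self.TOKEN:
            return {"status": 401}
        return {"status": 200, "order_id": order_id}

def flow_test(api: FakeApi) -> bool:
    # Step 1: authenticate and keep the token for the rest of the flow.
    auth = api.login("alice", "s3cret")
    assert auth["status"] == 200
    token = auth["token"]
    # Step 2: use the token, and pass response data onward ...
    created = api.create_order(token, "widget")
    assert created["status"] == 201
    # Step 3: ... instead of only checking "I got a 200" on one request.
    fetched = api.get_order(token, created["order_id"])
    assert fetched["status"] == 200 and fetched["order_id"] == created["order_id"]
    return True

print(flow_test(FakeApi()))  # True
```

A single isolated Postman-style check would pass even if `create_order` returned an order id that `get_order` could never find; only the chained flow catches that.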

[00:35:35] Joe Colantonio Absolutely. And Tobias, earlier you mentioned people should try it; if it works, great, if not, don't. I know there are a lot of skeptics out there, so I'm asking you to take Tobias' TestResults.io challenge. You'll find a link for it down below. Do what he says: try it for yourself and see if it works for you. If it does, learn more, because I think it's really going to help a lot of people, especially at the enterprise level, where I know there are a lot of people struggling. Okay, Tobias, before you go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts? And what's the best way to find or contact you?

[00:36:06] Tobias Müller The best advice I can give, and it's changed in comparison to last time, is: clear your mind and be open to new technologies. I see a lot of people stuck in all of those discussions, I mean, just as we discussed, is it checking or is it testing? You can do all of that, but it just takes time. So open your mind and be open to new technologies. And don't learn how to code, don't learn how to refactor. Learn to understand how algorithms actually work and learn how to apply them to testing, and actually make developers part of your testing. Make them proud that they can support you in your testing endeavors, but get out of your chair and get the newest technologies to test.

[00:36:44] Thanks again for your automation awesomeness. Links to everything we covered in this episode are at testguild.com/522. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:37:18] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com where you become part of our elite circle driving innovation, software testing, and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.

[00:38:02] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.
