QA: Masters of AI Neural Networks with Trevor Chandler

By Test Guild

About This Episode:

Want to know how QA can leverage AI to help in your SDLC? In this episode, Trevor Chandler shares how QA can use AI to achieve the next set of advances in the global world of technology. Discover real strategies for using AI as one of the tools in QA. Listen in to hear about this and the other evolutions we stand on the edge of as testers.

Exclusive Sponsor

The Test Guild Automation Podcast is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

About Trevor Chandler

Trevor is a private researcher at Artificially Intelligent who holds 18 patents in automation and artificial intelligence. He is also a testing expert with over 150 completed test projects across 65 companies, along with numerous other achievements in technology.

Connect with Trevor Chandler

Full Transcript Trevor Chandler

Joe [00:01:27] Hey Trevor! Welcome to the Guild.

Trevor [00:01:30] Hi Joe! Thanks so much for having me. It's great to be here.

Joe [00:01:32] Awesome to have you. Trevor, before we get into it, is there anything in your bio that you want the Guild to know more about?

Trevor [00:01:37] No, that's very accurate. You know, I started automation very early, when there was still analog automation. You know, we had XRunner and WinRunner, some of the forefathers of the automation tools that I'm sure many of us have used. So things have really changed. You know, very exciting technology these days.

Joe [00:01:58] Absolutely. You bring up WinRunner. I loved WinRunner. That's how I learned. So it's good to have someone that actually has that experience as well. I guess, Trevor, as I mentioned, you've been involved in AI a lot longer than I think most people have. And so I'm just curious to know, from your perspective, since you have experience in both: you know, AI is a buzzword now in testing. How real are the AI advancements, and how do they fit with testing in general?

Trevor [00:02:24] So AI is a very interesting phenomenon in testing today because there are a lot of people that are using it for the sake of using it and they shouldn't be using it. So you have a lot of that happening. You also have some cases where it is highly effective, but the techniques and the ways to do that, there is not an abundance of information. It's not very easy to get started with that. So if we look at the specific things that AI provides today that I believe are highly advantageous to testing, we have things like reinforcement learning algorithms that instead of navigating their way through a maze or through some kind of physical trial, they're navigating their way through a web application. So instead of a maze and walls being their barrier, their environment is a web app and their walls are dropdown boxes and text appearing in places and things of that nature.

Joe [00:03:24] Nice. So I definitely want to jump into reinforcement learning. But you mentioned there are some people that shouldn't be using AI. What do you think of people trying to leverage technology that may not necessarily fit what they're trying to do?

Trevor [00:03:34] So I see a lot of visual approaches, first of all, where it's almost like a revisitation of the analog techniques. In the old days, when all we had was X and Y coordinates, we used them. We did the best we could. We made sure the resolution was the same, the screen was the same, our fonts were the same. And we hoped that nobody touched the mouse and that everything kind of worked. Well, in artificial intelligence, one of the most advanced things that you can do today is object recognition inside of images or videos. A lot of the tools that are out there today are using that technique. They're trying to use artificial intelligence to recognize dropdown boxes and text fields and things of that nature. And on one side, it works very well in some ways, just like analog did. But in other ways, it's very difficult. One good example is, say I'm dealing with a dropdown box. If I open that up, it takes me one command, and I can make that command from Selenium. I can make it from a checkpoint in QuickTest Pro. I can use artificial intelligence to do that, but it would kind of be overkill. And when you do visual AI in that scenario, it's very hard to dynamically recognize a list of names that's opened up from a dropdown box. So you get into a lot of these scenarios where people are using a technology that in some ways works very well, but in other ways, they've reintroduced the brittleness of analog. The other problem is that one of the great strengths of artificial intelligence is to be able to take a huge combination of inputs, try every possible way these things can be combined, and learn from those combinations: learn what is good, what is bad, what got us closer to the goal, what got us further away. And parts of machine learning do this very well. But when you do this with visual recognition, those machine learning mechanisms that have the ability to do all those combinations are not being used. So really, that's what you see mostly. The other thing that you see is people saying things like, “Oh, well, I used artificial intelligence to pull all my CSS selectors out.” It's like, well, congratulations. But that was only one line of code, you know, in Selenium. So you just spent three months making this algorithm for the sake of it being cool because it's artificial intelligence. So congratulations on that. It's very cool, but not very practical in our jobs.

Joe [00:06:14] And so I guess when you talk about visuals, you're not talking about visual validation. You're talking more about using visuals to try to drive a browser.

Trevor [00:06:21] Yes. The artificial intelligence will take all the pixels, put the pixels into an algorithm, and then, by looking at edges and corners and characteristics of pixels, it will determine whether what it's looking at is what it thinks it's looking at, basically. So it's an actual computer vision process that people are using when they do this.
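
To make that pixel-level process concrete, here is a minimal Python sketch (not from the episode) of the kind of edge extraction a computer vision pipeline starts from before it can decide what it is looking at; the screenshot filename is hypothetical.

```python
import cv2

# Load a UI screenshot and extract edges: the raw, low-level features a
# vision model works from when deciding "is this a dropdown box?".
image = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(image, threshold1=100, threshold2=200)    # edge detection
cv2.imwrite("edges.png", edges)
```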

Joe [00:06:44] Got you. I was watching a previous presentation you gave on YouTube, and you talked about the different types of AI, the levels of AI. So I thought before we actually get into the weeds here, maybe you can set it up by explaining the different types of AI, so we know exactly what we're talking about when we say AI or machine learning in this context of testing.

Trevor [00:07:01] Sure. So there is a little bit of a difference of opinion on this. Some people call all the disciplines that have to do with learning with the machine and predicting artificial intelligence. Sometimes that's easy for conversation, and probably good for us to do on this call as well. However, when you dive in deep, artificial intelligence has a number of pieces inside of it that are really different things. Computer vision, for example, is everything about identifying objects in pixels and identifying actions in videos, whereas machine learning is taking a bunch of data, looking at the patterns in there, and then making predictions based on that. So you have a wide variety of different types of artificial intelligence. And as they get more and more popular, they seem to break out of the artificial intelligence label and get used under their own labels. So one school of thought is that everything is artificial intelligence. The other school of thought is to use the names for these more popular parts, where machine learning, computer vision, natural language processing, and artificial intelligence itself would probably be the four top labels.

Joe [00:08:15] Got you. You already mentioned this, but I think you really hone in on the reinforcement learning aspect of AI. And then I want to talk about how that relates to automation testing. So what is reinforcement learning?

Trevor [00:08:25] Reinforcement learning is when the computer is basically trying a bunch of actions, and each time it tries an action, it checks to see if that helped it get closer to or further away from its objectives. If it got closer, it gets a reward assigned. So that's the reinforcement: it's reinforced with reward points. And if it didn't get closer, then either no reward is given or you could even subtract reward. So it's this mechanism that forms reinforcement learning. And to know if you could use it, you need four things. You need an objective. You need a set of actions that in some combination you believe will achieve a goal. You need the ability to measure those actions, and you need the ability to assign rewards. So if you have those four things, you can use reinforcement learning to solve a problem. And in software automation, we have those four things. In fact, we have them much better than almost any other use case in the world for artificial intelligence, because we're all about making actions and then verifying the outcome.
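
To make those four ingredients concrete, here is a toy Q-learning loop in Python, a sketch rather than anything from Trevor's setup: the objective is reaching the last state on a five-state line, the actions are moving left or right, the step function measures each action's outcome, and a reward is assigned when the goal is reached.

```python
import random

# The four ingredients in miniature: an objective (reach state GOAL),
# a set of actions, a way to measure outcomes, and reward assignment.
GOAL = 4
ACTIONS = [-1, +1]  # move left or right along a tiny five-state line

def step(state, action):
    """Measure the outcome of an action and assign a reward."""
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

# Q-table: the learned value of each (state, action) pair.
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    for _ in range(100):  # cap steps so early, clueless episodes still end
        # Explore sometimes; otherwise exploit the best-known action.
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        next_state, reward = step(state, action)
        # Standard Q-learning update.
        Q[(state, action)] += alpha * (
            reward
            + gamma * max(Q[(next_state, a)] for a in ACTIONS)
            - Q[(state, action)])
        state = next_state
        if state == GOAL:
            break
```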

Joe [00:09:31] Absolutely. I guess when people hear about reinforcement learning, they think it needs a lot, a lot of data. So how many test runs would you need in order to make this actually worthwhile?

Trevor [00:09:39] So this is the interesting point, because a lot of technologies had to come together to even make this possible at all. Speed is a real problem. We have to run many, many times. And when we're doing web testing especially, or anything that has a UI, especially in today's day and age with the heavy JavaScript layers, you have to be able to run very, very quickly. Now, fortunately, we don't need to run millions of times or anything like that, but we do need to run hundreds, maybe even thousands in some cases if they're really complex. So what we do is we use headless mode. When we do this on the browser, we engage headless mode, which some of the listeners may be familiar with. Headless mode is a mode for all the browsers now that allows you to run tests in memory. But it's not just testing against the HTTP request and response like HttpUnit or JMeter. It is actually taking that whole DOM layer from the browser and simulating it in memory. So with headless mode, we can finally run all the iterations that we need to train these models. So it actually works quite well. And that's the other point about it: your tests will run super fast because you are running in headless mode. The way this works is the reinforcement learning will train until it meets your objectives, and it will try every wrong way and every right way. That's the only way it can know that it's found the best way. Once it does that, you end up with a neural network that knows how to execute that test. So then when you run it again after the training, it runs immediately. And especially if you're running in headless mode, you can run thousands of tests as fast as you would run unit tests, so you get a speed advantage out of it as well, which is very nice. Plus, from a test coverage perspective, because it's trying all these different things, you get really excellent test coverage. Now, your test coverage is more than just trying all the combinations. You have to go across the different techniques. You have to make your test cases in certain ways where the goals and objectives you give your artificial intelligence are achieving the different types of testing techniques we might use, like boundary testing, stress testing, feature testing, end-to-end testing, etc. So you do kind of have to get crafty there and make sure your objectives are exposing enough types of testing. But it gives you that thorough combination automatically, in a way that I've never been able to be that thorough myself.
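
For listeners who haven't used it, engaging headless mode from Selenium's Python bindings looks roughly like this (the URL is a placeholder, and newer Chrome versions prefer the "--headless=new" flag):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")        # run Chrome without a visible window
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")         # hypothetical app under test
print(driver.title)                       # the full DOM is available in memory
driver.quit()
```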

Joe [00:12:17] So how do you reward it, though? Is that a manual process as it runs through a thousand iterations? How do you know? Is it an algorithm you set up ahead of time? How do you take the training, reward it, and know that, yes, it actually did the step and that's valid, or it did the step and that's bad?

Trevor [00:12:31] So, I'll give you some videos that show this in action, basically. But your asserts or your checkpoints, you put the reward assignments in those calls. So when your assert comes back, if it matched what you thought it would, it'll assign a reward. What I usually do is I'll make three different assertion classes or types of checkpoints. I'll make micro ones, which are assertions checking very small things, like one field: I put something in one field, and the value was accepted and wasn't flagged as unacceptable characters or whatever. Those asserts are very low reward, but there's still a reward. Then I'll make minor rewards, which are when I get through everything on a single page or everything for a single feature. And then major assertions are set up where I go across many pages and achieve an objective, or I've used many features together. So I'll set up the reward: maybe the micros I'll give 0.3, because we're dealing with values between zero and one here, with one being a great reward and zero being horrible, basically nothing good happened. The minors I'll give 0.6, and then the majors I'll give a full point. So in this way you just make calls to the right assert in the right place to assign the right reward, and you really don't have to worry. It's no extra work beyond what your assertions would typically be. But the interesting thing about this is that you set up these assertions in advance, and when the machine learning actually runs, it generates the code for the tests. So while it's training, it's trying all these things and assigning reward. In the back end, you've assigned your different types of asserts and you've defined your major, minor, and micro goals. And I actually have a user interface that I've created to make this easy, but you can do it in your code. That's the setup. Once you've done that, the reinforcement learning is smart enough to know how to use dropdowns, how to use text fields, how to use links, how to use buttons, how to use checkboxes. There's already a library built up in Selenium that performs these functions. So it's trying all these different combinations, and it's trying them by generating the actual Selenium code itself. It's generating the automation, and then the automation gets run. That's kind of how this works: the reinforcement learning is basically a system that's creating a huge number of little Selenium tests and then tracking how successful each one was.
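
A hypothetical sketch of that three-tier reward scheme; the class and method names here are illustrative, not from Trevor's codebase:

```python
# Tiered, reward-bearing assertions mirroring the micro/minor/major scheme.
class RewardingAsserts:
    MICRO, MINOR, MAJOR = 0.3, 0.6, 1.0  # rewards on a zero-to-one scale

    def __init__(self):
        self.reward = 0.0  # accumulated over one training episode

    def _check(self, condition, reward):
        if condition:
            self.reward += reward
        return condition

    def assert_micro(self, condition):
        """One field accepted its value; no validation error was shown."""
        return self._check(condition, self.MICRO)

    def assert_minor(self, condition):
        """Everything on a single page or single feature completed."""
        return self._check(condition, self.MINOR)

    def assert_major(self, condition):
        """A multi-page objective (e.g., a full registration) was achieved."""
        return self._check(condition, self.MAJOR)
```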

Joe [00:15:12] So where does the training occur? Is it in, say, a staging environment? Do you check it into CI/CD and just say, “Okay, it's going to be learning over X amount of runs and then we know we can get valid results”? Or is it like a confidence level that you have in a staging environment before you check into CI/CD systems? How does it normally work?

Trevor [00:15:28] You would set your goals and let it train and run on its own. It might take anywhere from one minute to thirty minutes, based on how complex it is or how many pages or features you were crossing. The training process is separate. Once you have your stuff trained, when you actually plug it into continuous integration or continuous delivery, you're not plugging in the training process at all. You're plugging in the most efficient tests. While it's training, it's creating all these different combinations and trying all these different things. But for each goal, once it identifies the test data and the steps it needed to take to meet that objective, let's say we're registering on a web page and filling in our profile, it might try all different types of data in each field until it gets one that takes: a picture shows up where the picture's supposed to be, characters are in the name field, numbers are in the age field, etc. When it gets to that point, you have the optimized test. You can handle it in different ways; this is just how I do it. I save just that most optimized test. Now I have my test case done. That is what should get exported to continuous integration or continuous delivery. So you train on all your things, and it outputs Selenium tests that are just the test case. They're not all the tries. All the tries and all the failed attempts only happen during training, once. So what you end up with are your streamlined test cases that meet the goals or objectives you set up before you ran the training process.
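
A hedged sketch of that export step, assuming a trained Q-table `Q` and a hypothetical `env` wrapper around the application under test; every name here is illustrative. The idea is to replay the greedy policy once and write out only the winning steps as a plain test:

```python
# Assumes env exposes reset/done/actions/step plus a selenium_line() helper
# that renders one action as a line of Selenium code. All names are made up.
def export_optimized_test(Q, env, path="test_objective.py"):
    state, steps = env.reset(), []
    while not env.done(state):
        # Greedy replay: always take the best-known action, no exploration.
        action = max(env.actions(state), key=lambda a: Q[(state, a)])
        steps.append(env.selenium_line(state, action))
        state, _ = env.step(state, action)

    # Only the winning steps reach CI; the failed tries never leave training.
    with open(path, "w") as f:
        f.write("def test_objective(driver):\n")
        f.writelines(f"    {line}\n" for line in steps)
```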

Joe [00:16:58] Very cool. So you mentioned at the top of…well, we went over who maybe shouldn't be using AI. So what are some use cases where you think people can definitely leverage AI and get really good value from it?

Trevor [00:17:08] So if you have a huge amount of inputs going into a system, that's one of the best criteria right there. If you have a system with a huge amount of inputs, and also if you have just a massive amount of test data, so much that it's cumbersome to deal with, these are the places where AI really shines, because it will take a huge amount of data and try it in every different place possible until it comes up with the data that works. And this is a very interesting point, because when you first set up, there are multiple ways you can set up your test data, and some of them require more of your time while some require none of your time. What I do is I take all my test data and put it in a database. And I don't tell my artificial intelligence what data goes where at all. I let it figure that out all on its own. It takes, you know, more time to train that way, but it takes no effort on my part. Now, if you want your training to take less time, you might take a table in the database and say it goes with this page, and then another table goes with this other page. So pages in your web app or your UI belong to tables in the database. If you do that, then the AI won't go and try all the data. It will go right to that table and try it. That takes much less time to train, but it takes more of your own time to tell the AI which tables go with which pages. And then you can go extreme if you want and tell it exactly, down to the column, which ones are acceptable. You could do it all the way down to the field level if you want. If you do that, your training time is going to be almost nonexistent. I'll sketch these hint levels after this answer. So that's one of the main considerations. If you have a lot of inputs and you have a lot of data, especially if you don't want to mess with the data or you don't really know for sure what can go where, those are situations where AI is very, very powerful. There's a very interesting byproduct, too. I think artificial intelligence in about four or five years is going to start to fundamentally change our job in automation and QA, because when you use an artificial intelligence training method to learn how to get through a bunch of features, you end up with a neural network that you can task and say, “Hey, neural network, go fill in those 30 pages.” Maybe we're doing loan application testing: “Go get me qualified for a 30-year, you know, five percent fixed loan.” Or maybe you're testing a finance system, and you could tell the neural network, “Hey, you know that monthly financial report you know how to run? Go and run that for me.” So what will happen is QA will start creating these neural networks as a byproduct of their training that could actually be used in the applications, bringing artificial intelligence capabilities into the applications under test. I know that's a wild idea, but we're in the right position. We have the right skill set, and we will have these neural networks. Basically, we'll become experts in training neural networks that know how to use applications like super users. So I see some advancement there. It's much more even than just how do I use it for testing.
I see QA in general about to evolve to a whole new level thanks to artificial intelligence.
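
The three data hint levels Trevor describes might be expressed as configuration along these lines; the structure, table names, and column names are purely illustrative:

```python
# Purely illustrative: how much you tell the AI about where test data lives.
DATA_HINTS = {
    "level": "table",  # "none"   = slowest training, zero setup effort
                       # "table"  = map pages to tables, moderate setup
                       # "column" = map fields to columns, fastest training
    "tables": {
        "/register": "profile_data",  # page URL -> database table
        "/loan":     "loan_inputs",
    },
    "columns": {
        "/register": {"name": "full_name", "age": "age_years"},
    },
}
```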

Joe [00:20:29] So that was actually my next question. But before I get to it, I was just thinking about test data. You mentioned that if you had a lot of test data, this would be a good approach. But can it be context-aware enough to know how to generate data for your application?

Trevor [00:20:43] You absolutely can. In fact, that's one thing that artificial intelligence today really excels at, and they call it classification. These systems are very good at getting the meaning of different data, pulling the topics out, understanding the language, and either going into huge data repositories and pulling out data using very specific criteria, or generating data. For example, you could take one or two examples of data that you like, and the artificial intelligence could take that and variate it in a huge number of ways while still keeping the characteristics of the data that you care about. Likewise, you could set artificial intelligence on a huge production database, or a highly advanced test database with so much data that you don't even want to use it anymore because it takes so long to find what you're looking for. Or maybe your test data is spread across 20 databases in different places. Well, AI can reach out to all your data repositories and organize the data in different ways, based on different criteria, and with natural language processing it's smart enough to actually pull topics out. So you could pull out data based on what is being talked about in the data, and all kinds of other powerful combinations.
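
As a deliberately simple stand-in for that variate-from-a-seed idea (plain randomization here, not actual machine learning), a sketch might look like this; the seed record and name lists are made up:

```python
import random

# One seed record; the generator keeps its shape (field types, email format)
# while varying the values, echoing the "keep the characteristics" point.
seed = {"name": "Ada Lovelace", "age": 36, "email": "ada@example.com"}

def variate(seed, n=5):
    firsts = ["Ada", "Alan", "Grace", "Edsger"]
    lasts = ["Lovelace", "Turing", "Hopper", "Dijkstra"]
    for _ in range(n):
        first, last = random.choice(firsts), random.choice(lasts)
        yield {
            "name": f"{first} {last}",
            "age": max(18, seed["age"] + random.randint(-10, 10)),
            "email": f"{first.lower()}.{last.lower()}@example.com",
        }

for record in variate(seed):
    print(record)
```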

Joe [00:21:59] You probably freaked some people out when you said that you see AI really evolving in four to five years to do some pretty unique things. So of course, I always get asked the question: is AI going to replace testers? What is the future you see for testers and QA, and what skills are they going to need in that four-to-five-year timeframe you mentioned?

Trevor [00:22:19] So this is the incredible thing about QA and about the future role that I believe it will play. When we talk about developers and other coders, they create things, and they always have: they create things for people to use. That's very different from what we do. What we do is create things that act like people doing things, and then we validate those things. Those are two completely different worlds. Now, if you compare those two skill sets to artificial intelligence, what's happening in AI is that coding and things of that nature are starting to be replaced by neural networks and other structures. So if anything, the need for development is going to start shrinking, and the QA skills are a perfect match for what you need to use artificial intelligence to its fullest extent: you need to be able to define objectives, define actions, evaluate the outcome of those actions, and map them toward complex goals. This is exactly what QA has done for decades. So really, I think QA people will be very excited to find that we're going into a new age where our skills will be more needed than ever. With traditional automation, you can get into a situation where somebody automates an application so well that they might consider not having the same amount of test resources anymore, even though we know there are all kinds of pitfalls to that way of thinking and it's not the right way to think about it. But with artificial intelligence, it's the absolute opposite. You get into a situation where QA is now the master of the neural networks, the master of training them, and the keeper of what they end up generating. At first, it will be our test case repositories. But very quickly, people will realize, much in the same way that RPA has just come out, that RPA is nothing more than test automation in the business environment. That's the first realization: that RPA, DevOps, and other types of IT automation are the same kind of thing. QA will be in a unique position. First, it'll be their test cases that they create with these neural networks, using their QA skills in the AI spectrum. But then very quickly, I imagine the same thing will happen that happened with RPA. People will start going, “Hey QA group, you did such a good job automating those tests over there. Do you think you can build up some RPA stuff for us?” And when they find out that we can do that and that we're generating these assets in artificial intelligence, they're going to go, “Hey, do you think you guys can generate a feature in the application for us? We want to create some tools of convenience for our users.” And that will be the next job. So if anything, QA is going to become a huge, important piece of the equation for advancing artificial intelligence and integrating it across all types of systems. At least that's my prediction.

Joe [00:25:24] Oh, yeah. People might think I'm nuts, because it almost seems like a seesaw. Before, the skills you really needed in QA were more manual. Then people said, “Oh, no, you need to be a developer. You need to know all these hardcore algorithms, exactly like a developer.” Now it almost seems like it's gone back the way it was before, where testing and QA skills are going to be leading the way. Am I over-enthusiastic about this, or should people maybe stop focusing on hardcore programming?

Trevor [00:25:51] I think you're absolutely right, because soon there's not going to be a need for super hardcore programming. There's going to be a need for people to maintain the actual back ends of these systems. But we're already getting to a point where neural networks are starting to generate code with good accuracy, and soon it's going to be voice-activated. Instead of asking Alexa to set your alarm clock, you're going to ask Alexa to create a program for you, and it's going to be able to do it better than we could as humans. So I think we're really going to start leaning on our knowledge of quality process: implementing quality process guidelines, using that as an approach to artificial intelligence, and building deliverables out of neural networks, or whatever the latest structure in artificial intelligence is, or whatever we end up calling it as the years go by. That's going to be where everything gets done.

Joe [00:26:51] Trevor, we've talked for about 30 minutes, and I can't believe I didn't ask this question. I think it's probably the one most people are on the edge of their seats wondering about, and that is: how do you use AI in Selenium tests? I mean, are there AI libraries for all types of language bindings, or how do you incorporate AI into existing Selenium tests?

Trevor [00:27:09] So basically, first of all, you need to use the Python language bindings. The reason for this is that there are three AI libraries in the world that are at the top, and nothing outside of these three comes close. It's PyTorch, which is Facebook; it's OpenAI, which is associated with Tesla and SpaceX; and it's TensorFlow, which is Google. These are all written in highly optimized C++ with some C, but they all have Python language bindings, and there's no real reason for that. It just happened. So that's the way the world is. So first of all, you need to get comfortable with Selenium in the Python language bindings, because then your Selenium and your artificial intelligence can directly talk to each other through Python. That's the first thing to understand. On my LinkedIn, there's an article called something like “Test automation is obsolete the way you do it today.” It's not meant to cause any trouble or say that automation is inferior right now or anything; I should probably change the title to be a little more friendly. But when you look at that article, it tells you exactly where the Selenium hooks into the TensorFlow, whether you're familiar with the page object design pattern or the page factory, which are the two most commonly recognized, acceptable ways to do Selenium, or even if you're not. And I'll send it to you, Joe, if people ask for it. I'll send you a presentation and some example videos. There's a tutorial called Frozen Lake, where you use reinforcement learning to help somebody navigate across a frozen lake without falling in, get their frisbee, and get back to the entry point. So we start there. If you don't know Python, you do a basic Python tutorial first; I have a link to one in my latest presentation that I'll send over to you. Once you do this, you start to understand the next step, and then the LinkedIn article takes it from there. So in Frozen Lake, the environment was the frozen lake, the actor was the guy running around, and the obstacles were the holes in the ice while trying to get a Frisbee, etc. The article on my LinkedIn tells you, in that context, how the Selenium test provides our actions. Our actions are clicking things, picking items from dropdowns, clicking links, pressing buttons. All those types of things are our actions. And the environment, instead of a lake, is the application we're testing. Maybe it's a web app, maybe it's a client-server app, etc. So these articles will walk you through how to do it. But in essence, in Selenium you build up your actions: you want some stuff to click buttons, stuff to click text, really the stuff that most everybody already has for their automated tests. You need those, and then you need to build a variety of asserts. I suggest building them in sets of three, with each one having a different type of reward assigned to it. Like I said, I do micro asserts, minor asserts, and major asserts. Once you have that stuff in Selenium, then for the artificial intelligence, the reinforcement learning part, we use a specific technique called Q-tables. That's the algorithm we use.
We use Q-tables with TensorFlow, and there's a great tutorial on YouTube. I can't remember the individual's name, and I'll send you a link, but he has a set of tutorial videos, some in Chinese and some in English. If you do a search for RL Tutorial Treasure Hunt, you'll find his videos: one of them is a treasure hunt reinforcement learning example and one is navigating a maze. These are really good codebases that you can go grab, use as your shell, and hollow out. Then the last piece that's left is hooking the calls to the actions into your web app instead of into the lake or the treasure hunt or the maze. I have a codebase set that starts this, and in the videos you can actually see the code being used. I've shown the code, I've shown the output that's coming out of the code, and you can see the Selenium code also. So you can kind of see how it all combines, even in the videos. Between those tutorials and the Selenium that you already have, of course, I apologize if you're not using the Python language bindings, but really one language is not much different from another these days. You will have to do that conversion, but chances are you pretty much have everything that you need already.
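
Putting that Frozen Lake analogy into code, a hedged sketch of a web-app environment whose actions are Selenium calls might look like the following; the locators, the goal check, and the class shape are assumptions for illustration, not Trevor's implementation:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# The app under test is the environment; Selenium calls are the actions,
# just as directions on the frozen lake were the actions there.
class WebAppEnv:
    def __init__(self, driver, start_url):
        self.driver, self.start_url = driver, start_url

    def reset(self):
        self.driver.get(self.start_url)
        return self.driver.current_url  # the state is "where we are"

    def actions(self):
        # Every clickable element on the page is a candidate action.
        return self.driver.find_elements(By.CSS_SELECTOR, "a, button")

    def step(self, element):
        element.click()
        # Assumed goal: landing on a confirmation page earns the reward.
        done = "confirmation" in self.driver.current_url
        reward = 1.0 if done else 0.0
        return self.driver.current_url, reward, done

# Usage, with a placeholder URL:
# driver = webdriver.Chrome()  # or the headless setup shown earlier
# env = WebAppEnv(driver, "https://app.example.com")
```

From here, the same Q-learning loop shown earlier in this transcript drives the environment: each episode resets the app, picks actions, and updates the Q-table from the rewards the asserts hand back.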

Joe [00:31:57] Okay Trevor, before we go, is there one piece of actionable advice you can give to someone to help them with their AI testing efforts, and what's the best way to find and contact you?

Trevor [00:32:05] You know, LinkedIn is an easy way to contact me, Joe. I'll also give you my email. I don't know if you post the information from this somewhere, but you could post my contact information so people can get it from wherever you post this. So that's a great way to get a hold of me. I'd say the best thing you can do, the number one actionable item, is a mental connection. You have to understand that it doesn't matter whether it's a self-driving car where the actions the AI is learning are driving in different directions, or whether it's the frozen lake and our actions are moving around the lake. It's no different from the web application that you're testing. Your application under test is an environment, and the things that you do in your app are your actions. Once you realize this, you can go to any tutorial on the Internet that has to do with reinforcement learning or other types of machine learning. If you have that mindset, you can use just about any AI out there, or you can quickly determine whether it's useful for you by making that comparison between environments and actions. And from a practical perspective, I would encourage you to do a basic Python tutorial.

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Matt Van Itallie Promotional graphic for a DevOps toolchain podcast episode featuring CTO insights on the impact of Gen AI on DevOps, with guest Matt Van Itallie, supported by Bugsnag

CTO Insights on Gen AI’s Impact on DevOps with Matt Van Itallie

Posted on 03/27/2024

About this DevOps Toolchain Episode: Today, we'll speak with the remarkable Matt Van ...

A podcast banner featuring a host for the "testguild devops news show" discussing weekly topics on devops, automation, performance, security, and testing.

Sideways Test Pyramid, WebDriver Visual Testing and More TGNS115

Posted on 03/25/2024

About This Episode: What is a Sideways Test Pyramid in testing Have you ...

Frank Van der Kuur Mark Moberts Tatu Aalto

RoboCon Recap: Testing, Networking, and Building with Robot Framework with Tatu Aalto, Mark Moberts and Frank Van der Kuur

Posted on 03/24/2024

About This Episode: Today's special episode, “Robocon Recapp,” is about the insights and ...