About This Episode:
Are model-based testing and record and configure-based testing mutually exclusive, or can they be used together to provide a comprehensive testing approach? In today's episode, Matthias Rapp, a test automation and Tricentis veteran, and Shawn Jaques, the Director of Product Marketing at Tricentis, discuss model-based testing and record and configure-based testing. We explore the differences between these two testing methods and when to use one over the other. We also discuss how they can work together and how AI and data-driven testing fit into these paradigms. Tune in to learn more about these testing techniques and how they can help ensure the quality and reliability of your systems.
Check out Model-based testing in the cloud yourself now: https://testguild.com/beta
Learn More:
Get early access to the new Tricentis Test Automation SaaS offering: https://testguild.com/beta
About Matthias Rapp
Matthias is a test automation and Tricentis veteran. He implemented, developed, and sold test automation solutions internationally for many companies over the past 15 years. He recently was the GM of a popular and Tricentis-sponsored open-source project – SpecFlow. At Tricentis, he is currently designing and overseeing the creation of a next-generation product line as VP of Product Management.
Connect with Matthias Rapp
- LinkedIn: matthias-rapp
About Shawn Jaques
Shawn Jaques is the Director of Product Marketing at Tricentis, focused on SaaS-based test automation solutions. He joined Tricentis through the Testim acquisition and has 20 years of SaaS and software experience in strategy, product management, and marketing at GitHub, BMC Software, and IBM.
Connect with Shawn Jaques
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:04] Get ready to discover the most actionable end-to-end automation advice for some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.
[00:00:20] Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. And today, you're in for a special treat: we'll talk all about model-based testing and record and configure-based testing, two hot topics. I hear a lot about both from experts on why they really save a lot of time, so I want to go in deep with this. We're going to go over why they're different, what they are, and when you would use one versus the other. Configure-based testing may not be as familiar to you as model-based testing. How can they work together? And so we have two experts joining us to go over that. We have Shawn, who is the Director of Product Marketing at Tricentis, focused on SaaS-based test automation solutions. He joined Tricentis through the Testim acquisition. I love Testim, so great acquisition with that. He has over 20 years of experience, so really great to have him on the show. We also have Matthias, who is a test automation expert and Tricentis veteran. He implemented, deployed, and sold test automation solutions internationally for many companies over the past 15 years. He recently was GM of the popular Tricentis-sponsored open-source project SpecFlow, another favorite here at the Test Guild, and he's currently designing and overseeing the creation of a next-generation product, yet to be revealed, as VP of Product Management. Really excited to have you both on the show, so let's get into it.
[00:02:08] Joe Colantonio Hey, guys, welcome to the Guild.
[00:02:14] Hey, thanks for having us.
[00:02:15] Thanks for having us, Joe.
[00:02:17] Joe Colantonio Great. Great. I always ask this: is there anything I botched in your bio that you want the Guild to know more about?
[00:02:22] I wouldn't say so, just that we try to have a lot of fun at work, and laughing and interjecting some comedy is definitely part of the daily work.
[00:02:29] It's very international, very international work. So that's one of the things that I actually enjoy a lot, too: talking to many different cultures, offices in so many places these days. So that's really enjoyable to me.
[00:02:40] Joe Colantonio Great. So I'm already starting to go off the rails. Shawn, I'm just curious, you worked at Testim, and you were acquired by Tricentis. A lot of companies are acquired by others in the space. Any tips you'd give to people on how to really make a smooth transition so you get the best of the acquisition and the customers really see the benefit?
[00:02:57] Shawn Jaques That's a good question. I must say, going from a small company to a large company, well, I've worked at large companies before, and while Tricentis isn't as large as IBM, where I was, it certainly was larger than Testim. And I'd just say, you've got to embrace the process and just be a disruptor. Just do the things that you think are the right things to do, and the processes will work themselves out.
[00:03:21] Joe Colantonio Absolutely. All right. So I guess let's get into the meat of the topic here. We're going to talk about model-based testing, but I think a lot of folks may not really be familiar with record and configure-based testing. I thought we'd set the stage, maybe set a definition for what we're talking about here. So Matthias, any thoughts on how you would define model-based testing?
[00:03:41] Matthias Rapp Yeah, sure, sure. Let's talk about that. So model-based testing, to me at least, is the idea of producing reusable artifacts of automation, of different automation bits and pieces. That's the essence of it. Whether it's a reusable function that you want to program, whether it is a technical layer that represents how you're identifying controls on your applications and how you're parameterizing them, what data you're driving it with, what parameters and variables you have with it. These are all layers and bits and pieces that you have to think about and go through as you assemble that well-thought-through test case, and model-based test automation really segments all of these and puts them into specific bits and pieces you can plug and play with. And that's the heart of it. So that's what model-based testing is.
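The layers Matthias describes, a technical layer that identifies controls, reusable modules built on top of it, and a data layer that parameterizes them, can be sketched in plain Python. This is an illustrative sketch of the model-based idea only, not Tosca's actual API; all names here are hypothetical.

```python
from dataclasses import dataclass

# Technical layer: how each control is identified lives in ONE place,
# so a UI change is fixed once and propagates to every test case.
@dataclass
class Control:
    name: str
    locator: str  # e.g. a CSS selector or control ID

# A module is a reusable bit of automation built from controls.
class LoginModule:
    user_field = Control("username", "#user")
    pass_field = Control("password", "#pass")
    submit = Control("submit", "#login-btn")

    def steps(self, data):
        # Data layer: values are injected as parameters, not hard-coded.
        return [
            ("type", self.user_field.locator, data["user"]),
            ("type", self.pass_field.locator, data["password"]),
            ("click", self.submit.locator, None),
        ]

# Test cases are assembled from modules plus data: "plug and play".
def build_test(modules, data):
    return [step for m in modules for step in m.steps(data)]

test_case = build_test([LoginModule()], {"user": "alice", "password": "s3cret"})
```

The point of the structure is the single point of maintenance mentioned later in the episode: changing `"#login-btn"` in one `Control` updates every assembled test case.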
[00:04:30] Joe Colantonio Now, I know a lot of people know a little bit about model-based testing, but I think there's some next-generation model-based testing, where before, you had to come up with the model yourself and make sure you had a clear picture of what the requirements were and what everything was doing. It sounds like now you can almost use technology to sniff production and things like that, and it actually makes the model for you. Is that something you've been seeing as, like, a next-generation type of shift?
[00:04:52] Matthias Rapp Yeah, really good question, Joe. So that's definitely where we currently see the trend going and also where we are going. We are actively working in that area, so that we can basically generate some of these models from actual user inputs versus tediously assessing it after the fact. So that's definitely where everything is going. AI is actually enabling this and driving it, making it possible these days. So I think there are some exciting opportunities ahead of us in exactly that area.
[00:05:19] Joe Colantonio Absolutely. So talking about AI, Shawn, you worked at Testim, so a great segue there, but this is not an AI-based question. I'm just curious to know how you would describe record-based testing?
[00:05:29] Shawn Jaques Yeah. So it sounds pretty simple. You basically record what the user is doing within the UI of the application that you're testing. You turn on the recorder, or you start to record a flow. If you think of an e-commerce app, you might log in to the app or search for a particular product, you click on that product, you add it to the cart. It records all those actions and then puts them into an editor that you can edit. There are two ways that those tools do it: one kind does the editing after the recording, and the other kind does the editing while you're recording. There are pros and cons to both approaches, but Testim is more on the edit-after-recording side. I'd just say that there were record-and-playback 1.0 tests that were pretty static; they're almost like a video. And then there is where the market has moved in this record-and-playback area, where everything that gets recorded is a unique, separate object. Then you can take action on it, or you can share it across different tests, or you can delete it, move it, group it, and share it. There's just lots of flexibility that has come as the tools have evolved over the last several years.
[00:06:40] Joe Colantonio Could you just explain a little bit more? It's not record and playback. It sounds like you're recording the app, it's doing maybe screenshots, and it's creating a model for you. Then after the test is done, can you go to the image and create a test based on the model it created? Is that off base?
[00:06:55] Shawn Jaques Yeah, it's a little less like model-based testing in that you're not really seeing a flowchart of the way that a user would progress through the app. But what you are doing is following the steps that you click on. So think about a simple login process. You click the log-in button, then you enter your username and password, and you click log in, and each one of those steps is recorded. Each one of those tasks that you're executing in the UI is recorded as a separate step within a test case, and then you can add different things to that test case. So if you say, well, I want to add data to it, I don't want to just use the username and password that were recorded, you can configure it to use a data set that you have. You can also say, I'm going to save this little group of steps and reuse it in other tests that I do. You can decide that maybe there was a tab in there between the user ID and the password fields, and I want to delete the tab because I don't need it. So there's a lot of manipulation you can do afterward. Once you get that recording down and it's in your editor, then you can do a lot of manipulation of it.
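The edit-after-recording flow Shawn walks through, where each recorded action becomes a separate, editable step that can be deleted, parameterized with a data set, or saved as a reusable group, can be sketched like this. The step structure is purely hypothetical, not Testim's actual recording format.

```python
# A recorder emits one step object per UI action; afterwards the list
# can be edited: steps deleted, values data-driven, groups saved for reuse.
recorded = [
    {"action": "click", "target": "login-button"},
    {"action": "type", "target": "username", "value": "joe"},
    {"action": "press", "target": "username", "value": "TAB"},
    {"action": "type", "target": "password", "value": "hunter2"},
    {"action": "click", "target": "submit"},
]

# Edit 1: delete the stray TAB keystroke that was recorded by accident.
steps = [s for s in recorded
         if not (s["action"] == "press" and s.get("value") == "TAB")]

# Edit 2: replace the recorded literals with an external data set
# (data-driven testing: the same steps run with any credentials).
def bind_data(steps, data):
    bound = []
    for s in steps:
        s = dict(s)
        if s["action"] == "type":
            s["value"] = data[s["target"]]
        bound.append(s)
    return bound

# Edit 3: the bound group can now be saved and reused in other tests.
login_group = bind_data(steps, {"username": "alice", "password": "s3cret"})
```

Because every action is a separate object rather than a static "video", edits like these are cheap, which is the flexibility Shawn contrasts with record-and-playback 1.0 tools.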
[00:08:02] Joe Colantonio Nice. Matthias, how would you describe how model-based testing may be different from record and configure-based testing?
[00:08:09] Matthias Rapp I think the big difference is, if we do what you were hinting at before, Joe, that we are basically auto-generating some of these test cases from observing users, for example, I think that's where it's blending a lot. It's also blending, I think, in the refactoring space, those two approaches. So essentially what we are emphasizing with model-based testing is to have these bits and pieces, where every single concern a test case has or may have becomes a bit that you can reuse and assemble into a sequence. And the reason why you want to do this is because you want to go from one test case to many, reusing all these things. You also want to be able to maintain and update these things easily. That's the reason why you're doing it. But a smart recorder, a record-based tool like Testim, is actually doing a lot of that bits-and-pieces creation as you record, right? So basically, it generates some of these reusable bits and pieces that you then can assemble from use case one to the next use case that you want to do with it. It might not do all of it. There is some complexity still in generating that model of the business logic, especially when you come to more complicated, long process chains; that would need to be very sophisticated. So I think, while we are trying to spearhead in that direction, the industry is not quite there yet, but this is where things are blending.
[00:09:27] Joe Colantonio Nice. Obviously, you want to be able to maintain and make your tests more maintainable. So hearing about a model, does it make tests more maintainable, less maintainable, or the same? Maybe we could talk a little bit more about maintainability. Does model-based testing help with reuse or that type of issue?
[00:11:02] Joe Colantonio And Shawn, any thoughts on reusability, or maybe the code base? Same type of question?
[00:12:03] Joe Colantonio Nice. I've always heard from people that use model-based testing that a lot of times it's easy to make changes because you just make them in the model. It automatically updates all the tests, and you don't have to go through each test finding all the objects. That's one of the benefits as well, I assume?
[00:12:16] Matthias Rapp Certainly. So a single point of maintenance is what comes to mind when you say something like that. So that's the goal, one place where you propagate all these changes from.
[00:12:23] Joe Colantonio Awesome. So a question I get asked all the time is: there are so many tools out there, how do I decide? Not only are there a lot of tools out there, but huge companies like Tricentis, which are experts in this space, have multiple tools for multiple scenarios. So I believe you have Tosca, which I think is more the model-based type of testing approach. And then you also acquired Testim, which is AI-based, mostly browser testing, last I checked. So what are some test cases that would drive maybe using both these products, or how do people know when to use one over the other? Do you have any rules of thumb that you use to help customers decide what to use?
[00:12:57] Shawn Jaques So yeah, this is one of the first things that we tried to tackle when we came into Tricentis: to figure out how we explain to our sellers and to our customers the difference between when somebody would want one versus the other. And it really comes down to the applications that you're testing. So let's start with technology. If you're testing just a web-based application, then that kind of fits the Testim model. Tosca does web-based applications, but it also does 160 other technologies. So it is truly a robust tool that spreads across all the apps in your enterprise. So if you're thinking about which team might only be concerned with one application versus all these applications, we think of the Agile development team as building this customer-facing application that is probably iterating a lot and changing a lot. And they really like that record-based testing because they can quickly come in, record a test for the feature that they're building, and have that serve as a gate to their CI. That is really quick and easy to learn and doesn't require a lot of expertise in the tool. Whereas the Tosca users, or those doing model-based testing, are going across these long end-to-end business flows that might start in a web app but then go into SAP inventory, or go into Salesforce to create an account, or go into NetSuite to do the financials. Those kinds of business flows are much better served with a tool like Tosca; the QA team that sits within IT can model that business flow across all those different apps.
[00:14:32] Joe Colantonio Sounds like an enterprise kind of grade, but I don't want to say that Testim isn't enterprise-grade. Is that the wrong way of-
[00:14:38] Shawn Jaques Yeah, we think that Testim is kind of sitting within that agile dev department within an enterprise. But we're really not aiming to be the kind of tool that crosses that long end-to-end business process flow within an enterprise.
[00:14:51] Joe Colantonio So once again, I keep bringing up Testim, but when I hear Testim, I think of AI. It was one of the early solutions out there that incorporated AI. And I'm just curious to know, we touched a little bit on AI and how maybe it can help with the model, but I know AI and machine learning are kind of thrown around nowadays as just buzzwords. Can they be applied to maybe model-based testing or record-based testing to make it better? Matthias, any thoughts?
[00:15:13] Matthias Rapp Yeah, it certainly can and should, and it must, if you ask me. So I think where these record-based test automation solutions that are out there right now, and especially what Testim is spearheading, is how to really produce stable automation from the get-go as part of the recording by leveraging AI. And that's great because it takes a lot of the headache away from unstable recordings. If we look back in history, we know that this was the reason the recording tools failed in the first place: the recordings were based on unstable parameters by default. The first recording tools went for screen coordinates, as we all know, 40 years ago. And that was never stable, because if you were on a different machine, or even on a different day on your own machine, it wouldn't work anymore. So Testim is doing a great job at this, leveraging AI for it. And I think that no matter what automation tool you use these days, you need to incorporate some of that to have more stable locators or identifiers. The other thing that future automation solutions have to do, in my opinion, is to help give users more guidance as they're producing their automation artifacts, and Testim is yet another great example of doing this, because Testim actually looks at, okay, I saw two different users creating a recording, a flow of the application, and Testim then figures out, well, parts of that flow are actually the same. So it's automatically merging those automated sequences for the users, making them reusable, making them a single source of truth by detecting that pattern of similarity. So, a great solution for that problem. And we see that that's definitely driving the market forward in that direction. And similarly, we should and have to apply the same pattern for model-based solutions. So when we have issues with object detection and locators, we should use probability and redundant locators, where we make decisions based on that .... to figure out what's the best one in the situation and therefore stabilize the test. And the same goes for .... in the actual test flows. So I think this is really where we need to be. But then even take it further by making even more suggestions to users on how they could assemble a better, more suitable test case going forward. This is really where the power lies.
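The "probability and redundant locators" idea Matthias describes can be sketched as a scoring fallback: record several attributes per element, then at run time pick the candidate that still matches the most recorded attributes. This is a simplified illustration of the general technique, not any vendor's actual algorithm; the attributes and threshold are made up.

```python
# Each element is recorded with several redundant attributes, so no
# single attribute (like a brittle id) is a point of failure.
recorded_locator = {"id": "login-btn", "text": "Log in",
                    "tag": "button", "class": "btn primary"}

def match_score(candidate, recorded):
    """Fraction of recorded attributes the candidate still matches."""
    hits = sum(1 for k, v in recorded.items() if candidate.get(k) == v)
    return hits / len(recorded)

def locate(candidates, recorded, threshold=0.5):
    # Pick the best-scoring candidate; tolerate some attribute drift,
    # e.g. the id changed but text, tag, and class still match.
    best = max(candidates, key=lambda c: match_score(c, recorded))
    return best if match_score(best, recorded) >= threshold else None

# After a redesign the id changed, but three of four attributes survive,
# so the element is still found instead of the test breaking.
page = [
    {"id": "nav-home", "text": "Home", "tag": "a", "class": "nav"},
    {"id": "signin-btn", "text": "Log in", "tag": "button", "class": "btn primary"},
]
found = locate(page, recorded_locator)
```

This is exactly why probability-based matching beats the screen-coordinate recorders Matthias mentions: a coordinate either matches or it doesn't, while a redundant locator degrades gracefully.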
[00:17:36] Joe Colantonio I love that, bubbling up insights to say, hey, just so you know, 90% of users do it this way. I love that.
[00:17:41] Matthias Rapp Exactly, 90% of the market or 90% of your peers are doing it that way, so why aren't you, right? That would be most ideal. Exactly.
[00:17:49] Joe Colantonio How about analysis? A lot of times I see people spending all their time looking at test results, and then the next run comes off, and then it's too late to even debug. I would think machine learning would be perfect for this. Does either of these solutions, Tosca or Testim, have something to help with the analysis if you had a lot of failures?
[00:18:07] Matthias Rapp So yes. One of the ways that Testim helps with that is we kind of aggregate like failures. So if you are seeing recurring failures, it will suggest, hey, you're not finding this particular element; you go fix that element, and then your tests start to work again, all of them. So it does help you triage those results a little bit. We also will suggest what the root cause of a particular test failure is. So if there's a failure that we've seen before, or we believe we know what it is, we suggest that cause, and then you can tag those failures. And that gives you a history of where your failures came from. Are they coming from bugs in the app, or are they coming from flaky locators, or are they coming from some environmental issue? If it's something like an environmental issue or a network issue, then perhaps you can fix that systemically in your test environment, and then all your tests start to improve. So there are some things that we think can facilitate that continuous learning and improvement through analytics.
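The triage described here, aggregating like failures so one fix repairs many tests at once, amounts to grouping results by a shared failure signature. A hypothetical sketch of the idea, not any product's actual analytics:

```python
from collections import defaultdict

# Raw results from a run: (test name, status, failure reason or None).
results = [
    ("checkout", "failed", "element not found: #buy-btn"),
    ("search",   "failed", "element not found: #buy-btn"),
    ("login",    "passed", None),
    ("profile",  "failed", "network timeout"),
]

def triage(results):
    """Group failed tests by shared failure signature, biggest group first."""
    groups = defaultdict(list)
    for name, status, reason in results:
        if status == "failed":
            groups[reason].append(name)
    # Fixing the root cause of the largest group repairs the most tests.
    return sorted(groups.items(), key=lambda kv: -len(kv[1]))

top_cause, affected = triage(results)[0]
```

Tagging each signature as an app bug, a flaky locator, or an environment issue over time is what builds the failure history Matthias mentions, so systemic problems stand out from real regressions.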
[00:19:06] Joe Colantonio Yes. So we mentioned Tosca is model-based testing and Testim is a lot more record-based. Do they ever talk to one another? Would you ever use them in the same flow? I mean, eventually, I'm sure they'll be integrated, but that's just a prediction, I don't know anything. I guess the bigger question is, if someone has a model-based testing tool and someone has a record-based testing tool, would they ever need to talk to one another, or would they ever help each other out?
[00:19:31] Matthias Rapp Joe, my comment would be that they certainly should, because I think the context is a bit different, as we've pointed out before, for when you would use one or the other. And where they come together is actually when the scope increases, and the scope increases basically from where you used the record-based tool. So any time, I would say, when you're a user of the record-based tool and then suddenly you want to reuse that test artifact, that test case, in a larger context. So for example, to Shawn's point from earlier, you have a custom-built web application, but that application is integrated with an SAP system and maybe some other systems downstream, or another custom-built application, and you're starting to end up with that process flow. This is when these tools actually would come together, right? Because for the two custom-built applications in that flow, you would probably want to use the record-based test cases, but they would feed into the other test cases that are taking care of the rest of the workflow. And there needs to be an orchestrator of this whole thing, and this is how we currently think about it. So this is where model-based also becomes the potential orchestrator of the individual bits and pieces where you have used a record-based tool.
[00:20:46] Shawn Jaques The thing I would just add to that is that there are a lot of organizations out there that are using some sort of test management tool, and that spreads not just to the UI tests but across all the different kinds of testing that you're deciding to use. And you certainly see both these tools fitting into a test management strategy that would say, we're going to use this tool for this part of the testing and this other tool for the other part, and coordinate it all at that level.
[00:21:12] Joe Colantonio So I was actually at STARWEST a few months ago, and I know Tricentis was there, and there were some rumors, like little whispers, that you're all working on a new model-based type of record-based automation tool. I don't know if we're able to talk about it here, but it was some sort of SaaS-like model-based testing solution. Anything you can reveal here about what that is, what that's going to be all about, and when people can try it?
[00:21:34] Matthias Rapp We can. Not just for you, obviously, but for your audience. This new product is actually, at this point, in open beta. So anybody that's curious can try it out. It's not a full-on secret at this point anymore. The name is Tricentis Test Automation, and it's a brand new SaaS platform that is bringing model-based test automation to the cloud, and we're very excited about it.
[00:22:02] Joe Colantonio The cloud, everyone talks about the cloud. What is the benefit of having something in the cloud, if people think it's just another tool? Why is this beneficial?
[00:22:10] Matthias Rapp So it's a very good question. It's beneficial because of one of the main things customers have basically asked us: look, I love your model-based solutions, they provide tremendous value to me, I'm operating a central QA team responsible for many different applications, but I really need a lightweight solution that I can manage, where I get the benefit of the cloud. And the benefit of the cloud for most of our customers is that they can scale their executions. That's really what everybody has in mind: get infinite cloud resources, right? You can run as many tests in parallel as you want. You don't necessarily have to operate a test lab anymore, which you do if you have something not in the cloud. So those are all reasons for us to completely re-imagine model-based test automation and leverage the cloud as an advantage in providing that benefit to our users.
[00:23:01] Joe Colantonio I don't think a lot of people think about scale when they're starting off. So if you start with a cloud-based solution, that's just another thing you don't have to worry about, because it's kind of built in for you from the start, it sounds like.
[00:23:13] Matthias Rapp Exactly.
[00:23:13] Shawn Jaques It also helps those remote employees or the different teams in different locations that all can contribute to testing, and they can share objects in a central location that's global.
[00:23:23] Joe Colantonio And this may sound silly, but you wouldn't imagine how many people I've talked to that have problems just installing software sometimes, like getting Selenium, or not Selenium itself but something that runs against Selenium, up and running on a developer's local machine. It's like a big tutorial. I guess with cloud-based solutions you just log in and there you go?
[00:23:40] Matthias Rapp Right. That's the ultimate goal, Joe, for any cloud product.
[00:23:45] Shawn Jaques Yeah. Like all cloud products, there's some sort of software that kind of ends up on a machine. It could be as simple as a Chrome extension or a little plug-in or small piece of software, but yeah, once you've got that installed, then you're just off and running, and it does make it a lot easier. And the other thing is all those updates that you would have if you had an on-prem solution. You can just avoid those because they get pushed out to the SaaS automatically.
[00:24:09] Joe Colantonio That's a big feature for sure. Absolutely. So we talked a lot about model-based testing and record-based testing. I think a lot of people may be doing just traditional automation testing or automation scripts. Is there any advice you would give to someone if they want to start moving towards a model-based type of approach or a record-based approach? Do they need to switch up tools, or is it just a different kind of mindset?
[00:24:29] Matthias Rapp I think the best way to get started with this, it's not a different mindset. Any experienced test automation practitioner probably knows about the concepts of a keyword-driven framework or a data-driven framework or things like that. And even if you use Selenium, at some point you run into these thoughts, because you have to really be mindful of how you build a well-structured project in your code base that you can maintain and manage. So if you have that mindset, you basically get an off-the-shelf solution that takes care of these good-automation-architecture concerns in your model-based product. Plus, you get the benefit of abstraction, so that it's readable for business users and you get a sense of what is actually being tested without having to debug or read through the code. So I think if you come with that mindset, this is really where model-based feels somewhat familiar to you. If you're completely new to test automation, with no prior experience, you're probably better off getting started with a record-based solution: just record a simple use case, see how it goes, and you can scale from there. That's basically how you can keep going.
[00:25:36] Shawn Jaques Yeah. I think if you are using a framework like Selenium, for sure you have that kind of experience of starting at one point in an application and writing scripts that basically get you to another point in the application, so it's fairly similar when you come to a record-based tool. You're starting at one place in the application, you're clicking through to another place, and you're creating your assertions. And that's how you get a test case. Model-based testing does require a little bit more, I would call it crafting the test case, or maybe thoughtful planning for how you want this test case, because you're going to think of: what are the objects within the UI that I'm going to interact with? You say, okay, well, I'm going to capture all those objects, and then you're going to start putting them together in a palette, like the login scenario I mentioned: you scanned all those little objects, and now you start to put them together in an editor that represents how somebody would go through that use case. So it does take a little bit more thoughtful consideration, I would say, when you're going through that part of it, and a little bit more understanding of the tools, so you understand how these individual piece parts or modules then get added together to create a test case. So it does require a little bit more planning. But by the same token, when you do come up with that use case, you get the benefit of the reusability and the modularity that you can then use across different applications. And it really does help set you up for longer-term success.
[00:27:07] Joe Colantonio Good advice. Okay, guys, before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts? And what's the best way to find or contact you, or learn all about Tricentis's awesome test automation projects and solutions?
[00:27:20] Matthias Rapp I guess I'll give you two points of advice. Number one, automation is a really exciting discipline. I hope everybody knows that, because listening to your podcast shows so. Keep going; it's a really exciting discipline, and stretch the boundaries of what's possible and what's doable. And the second one is, stay tuned to what Tricentis is doing, because we are really trying to give you guys the right tools to be successful with test automation. And I hope you guys are going to check out Tricentis Test Automation once it's available.
[00:27:47] Shawn Jaques Yeah, I'll just add that I think there's a tremendous amount of benefit from test automation. And if you're currently testing manually, whether you're going to jump into record-based testing or into model-based testing, get out there and give it a try. With Testim, and everything Tricentis, there are ways that you can try the product; in just a matter of minutes you can be into the product. The same thing with Tosca, and soon with our new product, you'll be able to try them out very quickly. And in fact, if anybody is interested in trying out the beta for this new technology that we're working on, it's actually out there on the website. You can get into the beta really quickly. Just go to Tricentis.com/ttabeta and that will get you into the beta.
[00:28:33] Thanks again for your automation awesomeness. Links to everything we covered in this episode are over at testguild.com/a427. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:29:18] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.