Podcast

191: Model-based Testing & DevOps with Michael “Fritz” Fritzius

By Test Guild
Michael Fritzius


In this episode we’ll be Test Talking with Michael “Fritz” Fritzius, the founder of Arch DevOps, about model-based testing. Discover what model-based testing is, how it works, when to use it, and who’s a good candidate for it.

About Fritz



Fritz is a tester, business owner, father, engineer, and technologist. President of Arch DevOps, he has been in IT since 2006 and around computers his whole life. He enjoys solving extremely hard problems, reducing the complexity of testing large systems into simple automated test frameworks for a variety of clients in a variety of sectors. He regularly experiments with smashing cutting-edge technologies together just for the lulz, which often results in specialized solutions for his clients. Along with this, he offers targeted training to level up the people around him and does show-and-tells of various technologies for his teammates.

He also blogs and writes articles, and has been featured in TechBeacon, testproject.io, the IIBA newsletter, TEST magazine, and his own blog at archdevops.com/blog. He has also spoken at a Full Stack user group and at an Agile meetup.

When not slinging code, speaking, or writing, he and his wife homeschool their children, teaching them all sorts of things relevant to their interests in math, art, science, exercise, and social skills. Fritz lives with his wife Charlotte in St. Louis, Missouri, USA, in an estrogen-filled house with their four daughters, Lorica, Amelia, Serena, and Karisa, and a fish named Marcia.

Quotes & Insights from this Test Talk

  • What is model-based testing? On the surface it's going to sound a lot more complex than it actually is. When you have a system you're testing, what people normally do is write individual tests: I run this through the system, and I expect this output. Model-based testing is different in that you actually build a representation of how the whole system is supposed to work. Then you take test data and run it through both the model and the system, and you compare the outputs. You don't necessarily have to know the input ahead of time, and you don't even really have to know the output. That sounds weird, but you're comparing the outputs from the system and the model, and when you see differences it means either there's a bug in the system or the model itself needs to be updated. The advantage of this approach is that you get that answer a lot faster. (The first sketch after this list shows the idea in code.)
  • The way I approach it, I try to use tools like Cucumber to actually generate the code. The code that's generated is not code a human would ever write; it could be a gigantic if-else structure. Nobody in their right mind would write that by hand. But you don't have to be a coding expert, because the layer you're working in to build out the model is just plain English. (The second sketch after this list shows what that plain-English layer can look like.)
  • You want something that looks pretty close to the data you'd run through the system, but it's kind of like Swiss cheese: you can put some pieces in there and fill in the gaps. That's the generator side of it, the thing that creates the pieces of data that are then run through the system. The second part is the model itself. You need something that represents how you think the whole system works; you treat the entire system like a giant black box and say, if I put some data in the front, I'm going to get something out of the model, but I'm also going to get an output from the system itself. The third piece of the puzzle is something that can intelligently compare those two outputs and tell you exactly what's wrong. (All three pieces appear in the first sketch below.)
  • The way software looks and feels, how it makes you feel when you use it, is something only a human can judge. I wouldn't say that's ever going to go away, because humans are fluid creatures; we always change our minds on things. But I think what AI will most likely replace is the time-consuming process of creating automation. I'm sure there are tools just around the corner that can handle things like: I've got this codebase, generate a bunch of automated tests that give you a really good assessment, really quickly, of where the bugs are. But manual testing, exploratory testing, that real critical-thinking, creative nature of humans is never going to go away. That's not being threatened.
  • Best piece of advice: leave no stone unturned and try everything. Don't be afraid to do something crazy. Don't be afraid to experiment. There's no right or wrong way to do this.
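To make the generator/model/comparator idea concrete, here is a minimal sketch in Python. It is not from the episode: the deduplication behavior, the system_dedupe stand-in, and the generate_input helper are all hypothetical, chosen only to show the shape Fritz describes, where generated data flows through both the model and the system and a comparator flags any disagreement.

# A minimal model-based-testing sketch (hypothetical example).
import random
import string

def model_dedupe(items):
    # The model: an obviously-correct statement of intent --
    # remove duplicates, keep first occurrences, preserve order.
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def system_dedupe(items):
    # Stand-in for the real system under test, which would normally be
    # a black box reached over an API, CLI, message queue, etc.
    return list(dict.fromkeys(items))

def generate_input(rng):
    # The generator: produce data shaped like real traffic.
    return [rng.choice(string.ascii_lowercase) for _ in range(rng.randint(0, 20))]

rng = random.Random(42)
for run in range(1000):
    data = generate_input(rng)
    expected = model_dedupe(data)   # what the model says should happen
    actual = system_dedupe(data)    # what the system actually did
    # The comparator: a mismatch means a bug in the system -- or a model
    # that needs updating.
    assert actual == expected, f"run {run}: {data!r} -> {actual!r}, model said {expected!r}"
print("1000 generated runs: model and system agree")

Note that neither the input nor the expected output is written down anywhere; both are produced on the fly, which is why this approach can cover far more cases than hand-written examples.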
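And here is a sketch of the plain-English layer Fritz mentions. He names Cucumber; to keep the examples in one language this uses behave, a Cucumber-style tool for Python. The counter scenario and step definitions are invented for illustration, not taken from the episode.

# features/counter.feature -- the plain-English layer might read:
#
#   Feature: Counter
#     Scenario: Incrementing twice
#       Given a fresh counter
#       When I increment it 2 times
#       Then the counter reads 2
#
# features/steps/counter_steps.py -- the code backing those sentences:

from behave import given, when, then

@given("a fresh counter")
def step_fresh_counter(context):
    # The "counter" stands in for a hypothetical system under test.
    context.counter = 0

@when("I increment it {n:d} times")
def step_increment(context, n):
    for _ in range(n):
        context.counter += 1

@then("the counter reads {expected:d}")
def step_reads(context, expected):
    assert context.counter == expected, (
        f"counter is {context.counter}, expected {expected}")

Running the behave command from the project root executes every scenario, so non-programmers can add coverage by writing new plain-English scenarios against the existing steps.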

Connect with Fritz

May I Ask You For a Favor?

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.

Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.


Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
