About This Episode:
In this session, host Joe Colantonio sits down with Leonardo Lanni, a globally recognized quality engineer and founder of QA Roots, to explore a cool pair of testing techniques: mutation testing and reverse mutation testing.
See if your apps are visually perfect with App Percy: https://testguild.me/apppercy
Leonardo breaks down the concepts behind mutation testing, which is an innovative approach to verifying the effectiveness of your tests by intentionally introducing code changes—and introduces the idea of reverse mutation testing, a new technique for ensuring your tests themselves are up to the task.
They also discuss real-world QA challenges, practical tooling, and the future of automated testing in an AI-driven world.
Whether you’re a seasoned engineer or just getting started in QA, this conversation is packed with actionable insights and fresh ways to look at your automation strategy. Listen up!
Exclusively Sponsored By App Percy by BrowserStack
Let’s face it—mobile app testing is broken in more ways than one. QA teams and developers are drowning in manual visual reviews, flaky emulator tests, and false positives that waste hours. Testing across real-world devices? Often out of reach. And while functional tests may pass, you still end up shipping misaligned UIs that break trust with users.
That’s where App Percy by BrowserStack steps in.
App Percy is a purpose-built visual testing solution for native mobile apps, designed to fix the pain points that frustrate modern teams. With a single line of code, you can run visual tests on 35,000+ real devices—not simulators—capturing only meaningful UI changes, thanks to App Percy’s Visual AI.
It handles dynamic content, layout changes, and anti-aliasing noise so your team can review confidently and move fast. Plus, it works seamlessly with your CI/CD, Git, and testing frameworks—whether you're on Espresso, XCUITest, or Appium.
For teams aiming to ship aesthetic, stable mobile apps faster, App Percy delivers where others can’t. It cuts review time and scales effortlessly, so you can release your mobile apps with confidence.
Visual Testing isn’t optional. It’s THE experience. And App Percy makes sure you nail it, every time.
App Percy’s base plan is absolutely free to use, so do sign up and enhance your end-user app experience drastically with AI-powered visual testing.
Support the show and check it out for yourself now: https://testguild.me/apppercy
About Leonardo Lanni
Leonardo Lanni is a globally recognized Quality Engineering expert with over 18 years of international IT experience across industries including eHealth, financial payments, eCommerce, mobility, and startups. He has held senior leadership roles such as Head of QA, IT Leader, and QA Lead, and has a proven track record of recruiting, coaching, and transforming high-performing QA teams. A strong advocate for early testing (shift left), lean QA, testability, and observability, Leonardo believes quality is a mindset, not just a process. His expertise spans software quality engineering, automation, and agile methodologies, and he’s deeply passionate about continuous integration and process improvement. Leonardo actively mentors startups to scale quality from day one and contributes to the global QA community through articles, podcasts, GitHub projects, and coaching. He’s fluent in English and Italian, with working knowledge of German and Spanish, and holds over 10 U.S.-registered tech patents. With professional experience across Italy, the U.S., Germany, and Spain, he’s also committed to growing QA talent in emerging regions like Morocco and Africa—recently participating in GITEX Africa to support tech entrepreneurship. As the founder of QA Roots, Leonardo collaborates with international companies throughout the QA lifecycle.
Connect with Leonardo Lanni
- Company: www.qa-roots.com
- LinkedIn: www.linkedin.com/in/leonardolanni
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:00] Joe Colantonio Hey, what is mutation testing? What's reverse mutation testing? That's what we're talking all about today with our featured expert Leonardo. If you don't know, Leonardo is a globally recognized quality engineer. He has over 18 years of experience, from eHealth to e-commerce, fintech to mobility. He has built and transformed QA teams across startups and enterprises in the U.S., Europe, and beyond. Also, he is the founder of QA Roots. Really excited to have him on the show. I caught this LinkedIn post about reverse mutation testing. We never had anything on the show about this, so really excited to have him on. You don't want to miss it. Check it out.
[00:00:38] Joe Colantonio Hey, let's face it. Mobile app testing is broken in more ways than one. QA teams and developers are drowning in manual visual reviews, flaky emulator tests, and false positives that waste hours. Testing across real-world devices is often out of reach. And while functional tests may pass, you still end up shipping misaligned UIs that break trust with your users. And that's where App Percy by BrowserStack steps in. App Percy is a purpose-built visual testing solution for mobile apps designed to fix the pain points that frustrate modern teams. With a single line of code, you can run visual tests on over 35,000 real devices, no simulators, capturing only meaningful UI changes, thanks to App Percy's Visual AI. It handles dynamic content, layout changes, and anti-aliasing noise so your team can review confidently and move fast. Plus, it works seamlessly with your CI/CD, Git, and testing frameworks, whether you're on Espresso, XCUITest, or Appium. For teams aiming to ship aesthetic, stable mobile apps faster, App Percy delivers where others can't. It cuts review time and scales effortlessly, so you can release your mobile apps with confidence. Visual testing isn't optional. It's the experience, and App Percy makes sure you nail it every time. And even better, App Percy's base plan is absolutely free to use, so try it for yourself and enhance your end-user experience drastically with AI-powered visual testing. You can check it out using that special link down below.
[00:02:20] Joe Colantonio Hey, Leonardo, welcome to The Guild.
[00:02:25] Leonardo Lanni Thank you very much, Joe. I'm very honored to be in The Guild. So happy to be here. Thank you for hosting me.
[00:02:33] Joe Colantonio Great to have you. I guess before we get into it, I'm always curious to know, how did you get into testing?
[00:02:40] Leonardo Lanni It's a very good question actually. It goes back to my early days at IBM. I was a software developer at IBM, and what happened, long story short, I think it was around 2009, was that performance and scalability testing was the new hot topic back in the day. So IBM was building up a team of people that would do this. Automation basically consisted of testing systems where you needed to create a lot of agents. Back in the day, there were not many tools, so the idea was to build up a team of experts that would start doing performance and scalability testing. And the idea was to pick people from the developers, because those folks probably had a little bit of know-how about building scripts and programming, and so on and so forth. There was the possibility to join this pilot, and I decided to join it because I thought it was a very cool idea to start doing something that was a little bit on the frontier. And that was basically the beginning of my long-term adventure and relationship with, let's say, testing.
[00:03:57] Joe Colantonio Love it. I started off in performance testing as well. When did you start getting more into, like, higher-level QA topics, and why? I guess you started a company that actually helps people with this as well, so why pivot that way?
[00:04:09] Leonardo Lanni Well, there was, let's say, a chain of elements. Once I moved into performance testing, that really opened a new world for me. My perspective had been more about building things as a developer, but that opened up an amazing world about quality and quality engineering. And the more I worked in that, the more I realized there is so much room to help companies and to make sure that the products and services that we all love as customers, as clients, get delivered at the proper standard. Throughout my journey, step by step, I moved from being an individual contributor in a team helping with performance testing to, slowly, slowly, getting roles where I was the responsible one, the head of QA or the QA lead, the person who would work on quality engineering and on strategy. Most of the time I was collaborating with heads of engineering and CTOs. And I really liked that, because besides the test automation, which is clearly an important part of the process, there is a lot of strategy and a lot of consideration that goes into quality engineering. And I also really like to try and keep the standpoint of the final client. The final goal is eventually delivering quality and delivering business value. Throughout different roles and different companies, startups, and other countries, I ended up having more and more responsibilities, until I thought it was probably time to make this my main mission, my main job: to help companies deliver great quality. And that's what I'm doing now. And I'm very happy with it. It's very motivating and rewarding, and helping companies deliver great products, I think, is great.
[00:06:04] Joe Colantonio Love it. Love it. I guess the answer probably depends, I don't know. You work all over the world. You work with all kinds of different companies. Are there any commonalities you see people struggling with in testing, or is it just different depending on the vertical they're in or the country they're in?
[00:06:22] Leonardo Lanni I would say both answers are correct. There is a common basis to doing testing. One thing that I see in a lot of projects and companies is that they all try to follow the testing pyramid: doing a lot of unit tests, a little bit of integration, and, as you go toward the top, just a few end-to-end and exploratory tests. There are some generic guidelines that, more or less, I can see in almost all the projects. But that, I would say, is the boundary of what is common. Then it's quite interesting that every single company does quality engineering in a completely different way. Some companies really make an effort to deliver a great job. They create a book of standards. They try to make their own way of delivering great applications. And they invest a lot into creating centers of excellence, or, if you need to build an app, you don't start from scratch, but rather download something like, I would say, a boilerplate that already contains a lot of things built to the quality standards of the company. And then on the other side of the spectrum, you've also got companies where there is basically nothing. Quality just relies on the ability and knowledge of the developers. But in these companies, that's where the magic and the fun happen: you try to build something that helps deliver great quality. It really varies depending on the company, but probably the testing pyramid is something that everybody more or less agrees on.
[00:07:57] Joe Colantonio Awesome, awesome. All right, so as I mentioned, the meat of the show I wanna cover is reverse mutation testing. I saw this on LinkedIn on my feed and I covered it in my news show. And I think it was one of the top-clicked news items. I guess before we get into reverse mutation testing, people may not even be familiar with the term mutation testing, so let's define what mutation testing is, and then we'll dive into what reverse mutation testing is.
[00:08:24] Leonardo Lanni Yeah, yeah, that's the right way. Mutation testing, for me, the first time I heard about it, I thought it was a great idea. Let's simplify with an example. You have your code and you have some, let's say, unit tests covering the code that you're building. The main problem, or the challenge I would say, is: how do we make sure that those unit tests are really effective? A lot of times people talk about coverage as a good metric to evaluate unit testing, and covering all the code is definitely a good starting point. But is that really enough? My personal opinion is no, because, just as a matter of proving that it's not enough, you could write dumb test cases that just assert that true equals true, or assert the obvious. Test coverage, covering all your code through unit tests, is a great idea, but it's not enough. For me there is a need to understand: are those test cases built in a way that, if there is a bug due to some code modification, they will be able to catch it? The need is measuring and verifying the quality of those test cases. Are they valid? Do they really test what is important, the things that, if they fail, could break the application, fail to deliver business value, and put everything at risk? Mutation testing comes in with a very cool idea. What if we had a programmatic, let's say automated, way to modify our target code, the code under test? Imagine there is a tool that takes as input the code that we're testing, the application code we are producing, and modifies it, mutates it, if we want to use the right terminology. What does that mean? It basically means modifying, for instance, the condition in an if-then flow: if the condition uses a greater-than sign, you just flip it, and then you create a modified, a mutated version of this code, and you execute all your test cases against this mutated version of your code. Now, why would you do something like that? At first glance it may sound a little bit odd. Well, the idea is actually very smart. If you run a test case against a modified version of your code, what would be the expectation? The expectation would be that your test fails, because by modifying the code, while the test case is still testing for the original behavior, you would expect this mutation to make the test case fail. Now, this is something that I would say applies in general. We could always find cases where it doesn't apply, but in general, mutating the code and running the automated unit tests against it should make the tests fail. The mutation testing tools, all they do is take your code and apply some mutations, and mutating the code could be done in so many different ways. So the challenge is also finding the right way to mutate the code. And this is a computational challenge too, because there could be billions of ways of mutating the code, and you want to pick the right ones, because at the end of the day mutation testing is something that you would normally add on top of your testing pipeline, and it's something that must have a time budget. If mutation testing would take 20 more hours, it's not worth it, because nobody can wait an extra 20 hours to get test results. But imagine this can be done in 5 minutes, and at the end of this automated activity you have a report that tells you, hey, hold on, there are a couple of test cases that are not failing against this mutated code.
All you have to do is say, hmm, that sounds a little bit weird, and you just go and analyze those test cases manually and try to understand why they didn't fail. Now, there are two options. Option A is that, in general, there is a legitimate reason why they're not failing, and then life is good. You have validated your test cases to be good. But the worst-case scenario is: aha, you may have found a test case that is not doing the right job. Basically it's asserting something that is meaningless, there could be an error, there could be something wrong, in a way that if you mutate the code, your test case still passes. And honestly, Joe, this is the worst situation ever, because we all rely on our test cases. We all assume that whoever wrote those test cases, and they could have been written many, many years ago, reviewed them, that they're correct, and life is good. And what is even worse, technical debt is a really big problem in companies. Do we really have the time to go and review the test cases? And do we really want to review test cases which are passing? And this goes a little bit into psychology. If I see green stuff on my screen, happy days, you feel like popping the champagne and celebrating. The little voice that says, hey, are you sure those test cases are right? Maybe they're passing, but they're not doing the right thing. I may say, it would be great to validate that, but it's effort, right? But if I had a tool that would do that automatically and point out to me a couple of test cases, out of maybe hundreds, that could be suspicious, happy days. It's well-spent time, because if you find the wrong test cases and you fix them, you have increased the quality of your test automation, and you sleep much better at night knowing that your test cases are really effective and delivering quality.
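To make the idea above concrete, here is a minimal, self-contained sketch of mutation testing in Python. It is an editorial illustration of the technique, not Leonardo's tooling, and real mutation-testing frameworks apply many operators at scale and keep the mutants in memory. The sketch flips a greater-than comparison in a tiny function, reruns the tests against the mutant, and reports any test that still passes, which is exactly the suspicious case described above.

```python
import ast
import types
import unittest

# The "application" code under test, kept as a string so we can mutate it.
ORIGINAL_CODE = """
def is_adult(age):
    return age > 17
"""


def make_mutant(source: str) -> str:
    """One simple mutation operator: flip the first '>' comparison to '<'."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Compare) and isinstance(node.ops[0], ast.Gt):
            node.ops[0] = ast.Lt()
            break
    return ast.unparse(tree)


def load_module(source: str) -> types.ModuleType:
    """Exec the (possibly mutated) target code as an in-memory module."""
    module = types.ModuleType("target")
    exec(source, module.__dict__)
    return module


target = load_module(ORIGINAL_CODE)


class TestIsAdult(unittest.TestCase):
    def test_adult(self):
        self.assertTrue(target.is_adult(30))   # genuinely exercises the code

    def test_dumb(self):
        self.assertTrue(True)                  # the "true equals true" test mentioned above


def passing_tests() -> set:
    """Run the suite and return the ids of the tests that passed."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIsAdult)
    all_ids = {t.id() for t in suite}
    result = unittest.TestResult()
    suite.run(result)
    failed = {t.id() for t, _ in result.failures + result.errors}
    return all_ids - failed


if __name__ == "__main__":
    baseline = passing_tests()                         # everything should be green here
    target = load_module(make_mutant(ORIGINAL_CODE))   # swap the mutant in
    survivors = passing_tests()                        # tests that STILL pass are suspicious
    print("Suspicious tests (did not fail against the mutant):", baseline & survivors)
```

Running it flags only the tautological test: the meaningful assertion correctly fails against the mutant, while the "true equals true" test survives and would show up in the report for manual review.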
[00:14:35] Joe Colantonio There's a few things there. I guess everyone is always like, oh, we need a green build, we need a green build. You're right, but if it's always green, how do you know it's really passing? I don't know if that makes sense. Like, yeah, it's green, but is it really green? Is that the problem? I guess that's the first thing. You want to build trust in your tests. When they fail, people pay attention, but you can't really trust them if you've never actually seen them fail, I guess is the main issue.
[00:15:10] Leonardo Lanni This is exactly it. One of my mantras is: never trust a test case that you have never seen failing at some point in the past, because it's too good to be true. You make a very good point. We want green pipelines, we want everything working fine, but at the end of the day those pipelines, that test automation, are our guardians: they make sure that if there are problems, they find them and they fail. In an ideal world, a pipeline fails upon problems. In reality, you can have cases in which a pipeline fails because a test case is not maintained, and that can affect the reliability and the extent to which teams rely on those test cases, because if they start failing not over a real problem but over a maintenance issue, then over time teams start to say, ah, that's red, but it's not really reliable, and they don't even go and look if there is a real problem. They become useless, and maintenance has to prevent that situation. On the other hand, and this is actually the corollary case, if it's always green, we also want to trust that green means green, that green means the test cases passed. And we normally don't go and check whether something showing green is really green, and that's actually the catch. And by not doing that, which I think is human nature, if the testing is telling us everything is good, happy days, let's just move on, companies lose millions by relying on test cases that are not doing the right job. Mutation testing and reverse mutation testing are relatively quick wins that can help you diagnose, do a checkup on your test cases, find the wrong ones, and fix them. And it's much better if we find the problems than if the clients find them due to leaky test cases, let's call them that.
[00:17:16] Joe Colantonio It's pretty dangerous, because you're giving someone a false sense of security if you can't honestly say, yes, it's green, but is it really doing what we think it's doing? It sounds like this could be a one-and-done activity? Like, would you just write a suite of mutation tests, run it once, and say, okay, I've seen it fail, so I don't need to worry about that again? Or is this something you run in the pipeline every so often? Like, how does it work?
[00:17:48] Leonardo Lanni It's a very good question. In reality, I would say that my approach, I mean, it's great to integrate mutation testing into a pipeline, but maybe it's a little bit over-engineered, unless you run mutation testing only on new test cases. Because once you have validated your old test cases to fail upon mutations, and therefore they're telling you they're working fine, you can rely on them unless you modify them, and then, again, you should ask yourself whether the new versions of the test cases are working. So this is something you can restrict to new test cases. It's great to have mutation testing in a pipeline, given that you tune it so that it doesn't take too long to mutate the test cases in the right way. There are tools that do mutation testing and offer you the right options, and you should also make sure to run mutation testing only on the new test cases, because once you have validated the old ones, as long as you don't touch them anymore, you can rely on them. But I would say that would be the to-be, the ideal situation. If a company starts by saying, hey, let's from time to time run mutation testing even outside of the pipeline, just to verify our test cases, sort of a manual or semi-automated process, I would say it's a great starting point, because if you validate your test cases from time to time and they deliver quality and test the right thing, even if it's not in the pipeline, it's better than not doing it at all, I would say.
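One way to keep that time budget under control, as suggested above, is to scope the mutation run to the test files that actually changed. Here is a hedged sketch of such a pipeline step in Python; the `tests/` layout, the `origin/main` base branch, and the final tool invocation are illustrative assumptions, so substitute whatever mutation-testing command your stack actually uses.

```python
import subprocess
import sys


def changed_test_files(base: str = "origin/main") -> list:
    """Ask git which files under tests/ changed since `base`; old, already-validated tests are skipped."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--", "tests/"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [path for path in out.splitlines() if path.endswith(".py")]


if __name__ == "__main__":
    targets = changed_test_files()
    if not targets:
        print("No new or modified test files -- skipping the mutation run.")
        sys.exit(0)
    # Hypothetical pipeline step: invoke your mutation-testing tool scoped to the
    # changed files, with a hard time limit so the pipeline stays fast.
    print("Would run mutation testing against:", targets)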
[00:19:23] Joe Colantonio 100%. What's reverse mutation testing? Are these terms interchangeable?
[00:19:27] Leonardo Lanni Yeah, so basically, reverse mutation testing is sort of an evolution, I would say a complementary technique to mutation testing. You can apply both of them. The idea that I had comes again from this mantra: never trust a test case that you've never seen failing in the past. I think for most QA people, when you're at the stage where you're writing automated unit tests, but it could be any type of testing, as you're designing and implementing those test cases, you want to make sure that they're working fine. You run them and they pass, so life is good. But how do you really know that they are working fine? The easiest way I can think of is to just programmatically make them fail. A very simple example: you are testing a module that sums two numbers. If it takes 2 and 5 as input, you write a very simple test case asserting that the output should be 7. And this test case should pass as long as the code is really working. Now, to be 100% sure that this test is working, what I can do is very simple: let me mutate the expected output from 7 to 3, or anything that would be wrong. And now, if I run a test case that is checking that 2 plus 5 equals 3, I'm expecting the test to fail. If this happens, happy days, it's probably showing me that the test case is well designed. The problem is if the test case doesn't fail, because then I would say, hey, hold on, this test case keeps on passing even if I break it temporarily on purpose. Why is it passing? Now, obviously, in a very simple example like this one, that is not going to happen. But there are complicated test cases, where you're manipulating objects or checking API responses, where the chance of writing a test case that looks like it's working but actually has a mistake somewhere is quite considerable, and a great way to catch that is to just modify the test, break it, run it, and expect the test to fail. This is an activity that is relatively fast for one test, but now scale it up to all your test cases. It's great when it becomes part of your methodology for writing test cases. But again, what about all the test cases that were built in the past by other people? How do you trust them? Therefore, I thought I could put all those pieces together. And I thought, hey, instead of mutating the target code that we're testing, which is mutation testing, why don't we mutate the test cases? Exactly automating what I would do manually as a tester. I'm breaking my test case on purpose, really knowing what I'm doing, but instead of doing it myself, I let this reverse mutation testing tool do it. And again, when I run my test automation, the mutated tests against the unmutated code, I would expect the test cases to fail, because I broke them by mutating them. Having a tool that does all of this automatically and comes back to you with a report finding the little nasty test cases that are still passing after the mutation helps you identify problematic test cases, and we end up in the same scenario as with classic mutation testing. There may be legitimate reasons why they don't fail, and then life is good, happy days, there is no action item. But you may also identify test cases where, oops, by analyzing them you realize there was an error. You fix the error, you run your reverse mutation testing again, and this time they fail upon the mutation, and then you know that all your test cases are reliable.
Now, imagine having a tool that delivers all of that in minutes, plus a little extra effort from you just to fix what it finds. Imagine hundreds of test cases, and a couple of them are not failing upon the mutation of the tests. You fix them, run the reverse mutation again, and now they all fail as expected, and then you know that your test cases are reliable. That's basically the idea of reverse mutation testing. Just flip the angle: don't mutate the target code, mutate the test cases instead.
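Here is a minimal sketch of the reverse-mutation idea, following the 2 + 5 = 7 example above. It is again an editorial illustration, not the POC from Leonardo's GitHub: the production code stays untouched, the test's integer literals are nudged by a crude mutation operator, and a test that still passes after the mutation is flagged as suspicious.

```python
import ast
import unittest


def add(a, b):
    return a + b   # the (correct) production code stays untouched


GOOD_TEST = """
class TestAdd(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(add(2, 5), 7)
"""

TAUTOLOGICAL_TEST = """
class TestAdd(unittest.TestCase):
    def test_sum(self):
        result = add(2, 5)
        self.assertTrue(result == 7 or result > 0)   # passes no matter what 7 becomes
"""


def mutate_test(test_source: str) -> str:
    """Crude reverse-mutation operator: nudge every integer literal in the test."""
    class Bump(ast.NodeTransformer):
        def visit_Constant(self, node):
            if isinstance(node.value, int) and not isinstance(node.value, bool):
                return ast.copy_location(ast.Constant(node.value + 1), node)
            return node

    tree = Bump().visit(ast.parse(test_source))
    return ast.unparse(ast.fix_missing_locations(tree))


def suite_passes(test_source: str) -> bool:
    """Exec the (possibly mutated) test source and run it against the real code."""
    namespace = {"unittest": unittest, "add": add}
    exec(test_source, namespace)
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(namespace["TestAdd"])
    return suite.run(unittest.TestResult()).wasSuccessful()


if __name__ == "__main__":
    for label, source in [("good test", GOOD_TEST), ("tautological test", TAUTOLOGICAL_TEST)]:
        survived = suite_passes(mutate_test(source))
        # A well-designed test should FAIL once its literals are mutated; one that
        # stays green is exactly the suspicious case the report would flag.
        print(f"{label}: {'SUSPICIOUS, survived mutation' if survived else 'OK, failed as expected'}")
```

The good test fails once its expectation is broken, while the tautological one keeps passing; in a real tool those survivors are the handful of tests the report tells you to inspect by hand.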
[00:24:04] Joe Colantonio All right, so this might be a real dumb question. Is it always the test case that's broken, or does it actually find real issues with the code? Does that make sense? Is this just an activity to test your test code, not actually to test the code that you're testing?
[00:24:19] Leonardo Lanni It's a very good question actually. The thing is that when you do reverse mutation testing, or also mutation testing, the ideal state is that you're starting in a situation where everything is working, so your test cases are passing. So let's go into reverse mutation testing. If you start in a situation where your test cases are passing, the mutated versions should not pass, so ideally it means your code is correct. The scenario where those test cases instead fail because a new version of the code changed and broke them, that could always happen, but I think these are two orthogonal things. Here, I would say, the focus is more on validating the quality of your test cases, in the sense of catching, in an automated way, faulty test cases, because a faulty test case delivers nothing. And of course, by running them you can always land in the situation where they spot, they identify, a real problem, and happy days, because at the end of the day this is what test cases should do. But I see them as two orthogonal things.
[00:25:32] Joe Colantonio Gotcha! How's this different from, like, TDD? I used to do TDD with BDD: before I implemented each given-when-then, I made sure it failed before I implemented the next thing. Is this the same or is that different?
[00:25:48] Leonardo Lanni Well, TDD, as you say, test-driven development, is a methodology where you start by writing a test case before the code implementation, so the test fails, and then all you have to do is deliver the minimum code that makes that test case pass. This is TDD. Reverse mutation testing is something that you can put on top of TDD, in the sense of: okay, now I've written my test case through the TDD methodology and it's failing, then I wrote the code that makes the test case pass, but am I sure that the test case I wrote manually is really correct and there is no problem? Hey, let me apply reverse mutation testing. Let me break the test and make sure that, upon breaking it, it will not pass. When this happens, I know that my test is likely correct. If that doesn't happen, I may have accidentally written a test case that always passes, so it delivers no value. The short answer is that reverse mutation testing can be done on top of TDD; they're complementary and both deliver value. TDD is a methodology, and reverse mutation testing is a way to validate your TDD-generated test cases, I would say.
[00:27:06] Joe Colantonio All right, so you mentioned tooling a few times, and I forgot to follow up. Is there a tool for this? What is the tool for this? I thought this would be more of a manual, just you fat fingering things to try to make them fail. What's the tool you use for this?
[00:27:20] Leonardo Lanni I did a little bit of searching about reverse mutation testing and tooling, and I found nothing, basically, beyond the best practice of breaking your test cases in the process of developing them. So I actually decided to write, myself, what I would call a POC, a proof of concept, of a reverse mutation testing tool, just to create something that can be easily used, with the idea of building an MVP that really shows the concept. I created one that you can find on my GitHub. It's freely downloadable; you can check it out. Actually, I already had a pull request from a nice person in India who found some improvements, and I'm very happy, because eventually that's the goal: to make these projects available to the community, and the dream is having the community contribute to them. It's quite basic: it takes existing test cases, creates mutated versions, writes them to the file system, runs the mutated test cases against some toy code that I've created, and generates a report identifying the test cases that don't fail upon mutation, so the suspicious ones. Of course, real mutation testing tools don't create physical copies of the code; they rather do everything inline, in memory. So there is a gap to implement a much better version of reverse mutation testing, where the mutations of the test cases can be created at runtime, in memory, and run from there. But the idea is: let's build a foundation, and then, if the community shows interest, we can build a better version, optimize it, and start piloting it in some projects, I would say.
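Based on that description, the file-based workflow might look roughly like the sketch below. This is a guess at the structure from the conversation, not the actual code from Leonardo's GitHub; the `tests/` layout, the deliberately crude mutation operator, and the pytest invocation are all assumptions.

```python
import pathlib
import subprocess


def mutate(source: str) -> str:
    """Placeholder for real mutation operators (flip assertions, nudge expected values, ...)."""
    return source.replace("assertEqual", "assertNotEqual")   # deliberately crude


def run_reverse_mutation(tests_dir: str = "tests") -> list:
    """Write a mutated copy of each test file, run it, and collect the ones that still pass."""
    suspicious = []
    for test_file in pathlib.Path(tests_dir).glob("test_*.py"):
        if test_file.stem.endswith("_mutated"):
            continue                                          # skip leftovers from earlier runs
        mutant = test_file.with_name(test_file.stem + "_mutated.py")
        mutant.write_text(mutate(test_file.read_text()))      # physical copy, like the POC
        result = subprocess.run(
            ["python", "-m", "pytest", str(mutant), "-q"], capture_output=True
        )
        if result.returncode == 0:                            # mutated test still green: inspect by hand
            suspicious.append(test_file.name)
        mutant.unlink()                                       # clean up the mutated copy
    return suspicious


if __name__ == "__main__":
    print("Test files whose mutated versions still pass:", run_reverse_mutation())
```

As noted in the conversation, an in-memory implementation that mutates tests at runtime would avoid the file-system copies and run much faster.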
[00:29:34] Joe Colantonio I know you're a big proponent of lean QA and shift-left practices. Does this slow things down? Like, how do you know what you should mutate? Should everything be used for mutation testing and reverse mutation testing? And how much data do you need to validate whether or not it's actually passing at that point?
[00:29:56] Leonardo Lanni It's a very good question, because at the end of the day, is it worth doing mutation testing or reverse mutation testing? My answer is yes, as long as you have a time budget, as long as this doesn't affect the pipeline execution. If I need 20 hours to mutate my code or my test cases, it's clearly not worth it. But again, if we create a tool which is optimized and very fast at mutating the test cases, and we do a proper mutation, you can mutate a test case in different ways: you can change the assertion, you can change the result, the condition, and so on and so forth. And this clearly leaves room to discover how it can best be done, but by finding the right way of mutating test cases and implementing an optimized tool, we can make sure that the time budget for running reverse mutation testing is under control. So personally, if somebody tells me, hey, would you pay 5 more minutes in your pipeline when the return on investment is that you identify potentially broken test cases, fix them, and life is good, I would say definitely, because this prevents the situation where the pipeline is green, life looks good, but the test is broken. This prevents the possibility of a bug escaping to production and eventually being found by clients. And we all know that through bugs, companies lose reputation; the damage can potentially make a company lose millions of dollars or euros. So my answer is: with a time budget, a time limit, on the execution of mutation testing and reverse mutation testing, I see a lot of return on investment.
[00:31:55] Joe Colantonio All right. I'm surprised we went this far without talking about AI, but obviously everyone is talking about AI. Since you've been around for a bit, has AI really changed QA and testing? And where do you see the role of testers going in the future? Because a lot of people are saying it's going to diminish, and some are saying it's going to grow. I hear all these AI company founders saying we're all doomed. Where do you stand?
[00:32:18] Leonardo Lanni I'd say a couple of points. I think embracing AI is great, as long as you don't delegate everything to AI. You can divide up some tasks and ask AI to implement some automation for you, as long as you understand it, you review it, you own it, and you even ask AI to explain how it implemented a certain solution when you can't clearly see it, something I would not be able to do on my own. So definitely embrace it, because it increases your productivity. I actually think a lot of companies are probably thinking that everything can be tested through AI, but at the end of the day, I think you still need somebody to validate it. I see a shift in the role. You can delegate a lot of work to AI, but you have to stay in control so that it doesn't hallucinate and it delivers a great job. The other side of the coin is that I still see a challenge in how to test AI-based software, for instance, how to test non-deterministic systems. How am I testing something like a chatbot? You can't test it as: when I ask a question, the answer should exactly match my expected one. Clearly, that doesn't work when we move from determinism into the world of semantics, where a question can be answered in many different ways with the same meaning. That's actually where I still see a challenge: human beings need to find a way to automate the testing of non-deterministic systems. I still think that real humans can and must deliver a lot of work in the age of AI. As with every big revolution, there is a shift in the role, but we definitely need to keep working on it. That's my take.
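On the non-determinism point, one common workaround is to replace exact-match assertions with a similarity threshold against a reference answer. The toy sketch below uses a plain word-overlap score purely to illustrate the shape of such a check; a real suite would swap in an embedding model or an LLM-as-judge, and the threshold value here is an arbitrary assumption.

```python
import re


def words(text: str) -> set:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))


def similarity(a: str, b: str) -> float:
    """Crude Jaccard word overlap in [0, 1]; a stand-in for real semantic similarity."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0


def assert_semantically_close(answer: str, reference: str, threshold: float = 0.5) -> None:
    """Pass if the chatbot's answer is 'close enough' to the reference, not an exact match."""
    score = similarity(answer, reference)
    assert score >= threshold, f"answer too far from reference (score={score:.2f})"


if __name__ == "__main__":
    reference = "You can reset your password from the account settings page."
    # A differently worded but semantically equivalent answer should still pass.
    assert_semantically_close("Go to the account settings page to reset your password.", reference)
    print("Semantic check passed.")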
[00:34:14] Joe Colantonio I almost forgot one last question, a little plug for your company, QA Roots. What do you all do? If someone's listening who's a good fit, maybe they can contact you to get some assistance from you?
[00:34:25] Leonardo Lanni Yeah, absolutely. I created QA Roots with the metaphor of establishing quality assurance and engineering as the roots, the foundation, that allow your company to deliver an amazing product, the tree. The philosophy behind the company is really to help companies deliver great quality through best practices and quality engineering. I'm super happy to talk with any company that needs to create quality engineering, or even to fine-tune their existing practices. I am absolutely happy to support them with my knowledge, so feel free to reach out to me. And sometimes quick fixes, quick wins, can deliver incredible quality; it doesn't necessarily need to be something that lasts a long time. Even just starting a conversation, I'm happy to help companies deliver even better products.
[00:35:31] Joe Colantonio All right, Leonardo, before we go, is there one piece of actionable advice you can give to someone to help them with their automation testing efforts? And what's the best way to find or contact you?
[00:35:41] Leonardo Lanni Definitely. I think my suggestion for testing is: gone are the days when you could do black box testing and remain at the surface. Try to shift left in your testing, try to be adventurous, understand the code that you're testing, even if you end up doing just end-to-end tests. Talk with the developers, establish a good relationship with them, and also analyze the unit tests they're writing, considering they're normally done by developers. Be involved in the code reviews of pull requests and merge requests, analyze the way they're testing, and provide your suggestions. Don't be afraid of the code. You can even create a little project of your own where you start writing some code and understand the challenges from the development standpoint. Really be curious and get as close as possible to the code; that is probably the best way to become an excellent tester. To answer your other question, the best way to reach out to me is through LinkedIn, I am very happy when people contact me and I try to reply to everyone who writes to me, or you can write to info@qa-roots.com; I'm absolutely happy to reply and I check the email every day. So those are the best ways to reach out to me.
[00:37:04] Thanks again for your automation awesomeness. For links to everything of value we covered in this episode, head on over to testguild.com/a548. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:37:39] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:38:23] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.