AI’s Role in Test Automation and Collaboration with Mark Creamer

By Test Guild

About This Episode:

In this episode, host Joe Colantonio sits down with Mark Creamer, President and CEO at Conformiq, to delve into the fascinating world of AI in test automation.

See how Conformiq can transform your testing process by requesting a free demo now at https://testguild.me/aidemo

They explore how Gen AI and Symbolic AI are revolutionizing the generation and optimization of test cases, making testing processes more efficient and collaborative.

Mark shares his two-decade journey in utilizing AI technologies such as natural language processing and vision recognition systems, highlighting the distinctions between older and newer AI methods. The conversation underscores how Gen AI's accessibility and user-friendliness are transforming testing. At the same time, Symbolic AI continues to provide deterministic and predictable test case generation, which is particularly valuable in regulated environments.

Tune in to hear Mark's insightful perspectives on enhancing test automation through AI, leveraging system-level models for end-to-end testing, and optimizing BDD approaches for agile development. Whether you're a seasoned tester or just getting started, this episode is packed with practical advice and thought-provoking discussions that you won't want to miss.

Exclusive Sponsor

Are you ready to jump-start your testing process? Meet Conformiq, the leader in AI-powered test design automation. With Conformiq’s automated test design process, you can generate faster and better tests without scripting. Experience seamless, scriptless automation and join the Smart AI-led testing revolution today. See how Conformiq can transform your testing process by requesting a free demo now at https://testguild.me/aidemo

About Mark Creamer


Mark Creamer brings an engineering background to sales, marketing, and strategic partnering for a variety of software businesses. Currently, he is the President and CEO at Conformiq, which focuses on improving the quality and efficiency of software testing. With over 40 years of industry experience, he is focused on taking software testing to the next level.

Connect with Mark Creamer

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:00] Mark Creamer I think Gen AI is really good at making smart people smarter and more efficient. It's not good at making novices brilliant because we've heard of Gen AI hallucinations. Well, that would probably contribute to that level of feedback you get. But Gen AI, in a sense, is kind of working with somebody that gives you suggestions. And not all the people you work with are brilliant.

[00:00:30] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.

[00:01:05] Joe Colantonio Hey, today we're going to dive into the exciting world of Gen AI, and AI in general, in test automation. Joining us, we have expert Mark Creamer, who's going to help uncover how Gen AI streamlines test processes at the push of a button. You'll discover how Symbolic AI transforms user stories into optimized test cases, achieving better functional test coverage. You're going to learn how AI can boost collaboration across sprint teams, and explore the benefits and challenges of AI integration in testing, plus all kinds of other awesome insights from Mark on AI's impact on testing. If you don't know, Mark is currently the president and CEO of ConformIQ, which focuses on improving the quality and efficiency of software testing. With over 40 years of industry experience, he focuses on taking software testing to the next level. And in this episode, he provides AI insights that you probably haven't heard anywhere else. You're going to want to listen all the way to the end. Don't miss it. Check it out.

[00:02:01] Joe Colantonio Before we get into it, are you ready to jump-start your testing process? Meet ConformIQ, the leader in AI-powered test design automation. With ConformIQ's automated test design process, you can generate faster and better tests without scripting. So experience seamless, scriptless automation, and join the smart AI-led testing revolution today. See how ConformIQ can transform your testing process by requesting a free demo now. Support the show. Head on over to testguild.me/aidemo and see it for yourself.

[00:02:36] Joe Colantonio Hey Mark, welcome to The Guild.

[00:02:37] Mark Creamer Hey Joe, thanks for the intro and happy to be here.

[00:02:41] Joe Colantonio Awesome. I think before we get into it, maybe you could talk a little bit about your journey, how you got into testing?

[00:02:46] Mark Creamer Well, I got into testing when I joined ConformIQ a little over 12 years ago. I've been involved in a variety of different software companies, but ConformIQ has been focused on delivering high-technology, advanced testing solutions for over 20 years. And I joined 12 years ago.

[00:03:12] Joe Colantonio Nice. What's interesting is, I believe ConformIQ has been using AI for a while now, but recently with the rise of Gen AI, I'm just curious to get your opinion on where we were before and maybe how Gen AI, how it changes things or how it doesn't change things, or how it might be different from what we've seen in the past?

[00:03:32] Mark Creamer Yeah. Interesting question. Something I've been contemplating for a while now. Certainly, Gen AI is the proud new baby in the AI world. But, interestingly, AI has been around forever. Even while I was working on my MBA in the early 80s, I did a project on AI. The technology's been around a long time. The question is why is it so much more prevalent and so much more interesting today versus what it was in years past. Interestingly, AI has been used in testing, and it's actually been used in ConformIQ's products for over 20 years. The question, or the point, is what type of AI? The kinds of AI that I think people are familiar with but don't consider part of the new revolution of Gen AI are things like natural language processing and visual recognition systems. And interestingly, one is Symbolic AI. Symbolic AI is actually the technology that was most visibly deployed probably in the late 90s, when IBM built a system called Deep Blue, which beat Garry Kasparov in chess. And that's technology that we've had in our product for years. The way I look at the older AI versus the new AI: the older AI was more buried into the solutions that were delivered, and didn't have the leading, hey, everything-is-AI type recognition that Gen AI has. And the way I look at it, we can figure out how to leverage the value that AI has been delivering for years. I can talk about how I think it compares to Gen AI, and then talk about the combination of the two and how we can deliver even more value to our customers that need to do efficient, high-reliability testing.

[00:06:02] Joe Colantonio Nice. You make a great point, AI has been involved in a lot of these testing tools for years, but then Gen AI put the spotlight on it. Is it because it's easier to use and see? People may not have known, hey, this application is already using AI, because they didn't see it, so they didn't know the power of it?

[00:06:19] Mark Creamer Yeah. And I think the whole ease of interaction and the availability of Gen AI today is something that has kind of consumed and infatuated the world, where before, the usability and ease of access for AI was more buried into products. Things like natural language processing and Symbolic AI get used inside a platform or an infrastructure. Now, Gen AI is its own platform and, interestingly, can do a lot of the things that the AI that's been around for a while can do. But on the other hand, it also presents some risks and some variabilities, compared to the predictable, highly reliable AI that's been used for years, that anybody who wants to embrace it needs to be aware of. Symbolic AI is really more of an embedded technology. That's why Symbolic AI doesn't have the recognition that Gen AI has. Gen AI is, hey, it's a user interaction, easily accessible to anybody who wants to use it, where Symbolic AI is a technology that's embedded into a model-based testing framework or a test case generation framework. The discussion we had up front about where you use Symbolic AI versus Gen AI? Well, it's not really that way. I think we need to be clear that I don't choose one or the other. It's just that Symbolic AI is embedded, and the natural language processing stuff is embedded in these test case generation solutions, where you may not know you're using AI. That's why one is less prevalent and Gen AI is really prevalent, because there you're all about using AI versus getting a problem solved.

[00:08:30] Joe Colantonio And once again, we're not saying one is better than the other, I guess. It's a different application, but is one safer or more private than the other? Like if someone's concerned about Gen AI leaking things, I guess, or training models, is that embedded Symbolic AI better for that, or more private, or safer?

[00:08:49] Mark Creamer Yeah, that's a good question. Symbolic AI is an embedded technology that is controlled. The Accentures and the IBMs of the world call it responsible AI; it's very well defined and doesn't have the security issues and exposures that Gen AI can have. Symbolic AI is controlled and well defined. If you look at Gen AI and how it's implemented internally, if you are deploying that technology within your organization, you have to ensure that the models you're using, the LLMs, the large language models, are constrained and implemented in a way that they don't impact customer data and those kinds of things, because if you don't do it intelligently, you are potentially exposed there. And Symbolic AI does not have that issue.
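
To give a rough sense of the kind of constraint Mark describes around customer data, a team might scrub obvious identifiers from a prompt before anything reaches an externally hosted model. The sketch below is a hypothetical illustration only: `call_llm` is a stand-in for whatever LLM client a team actually uses, not a real library call, and the patterns are examples rather than a complete redaction policy.

```python
import re

# Hypothetical patterns for scrubbing customer identifiers from prompt text
# before it is sent to an externally hosted large language model.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),       # US social security numbers
    (re.compile(r"\b\d{13,19}\b"), "<CARD_NUMBER>"),        # likely payment card numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),    # email addresses
]

def redact(text: str) -> str:
    """Replace customer identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def ask_llm_safely(prompt: str, call_llm) -> str:
    """Redact the prompt, then delegate to the team's own LLM client.

    `call_llm` is a placeholder parameter, not a specific vendor API.
    """
    return call_llm(redact(prompt))
```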

[00:09:49] Joe Colantonio That's a good point. When you see discussions around AI and testing, a lot of times people pick on Gen AI for things like hallucinating. And I think they equate all types of AI with Gen AI, and it sounds like Gen AI is just a sliver of the whole AI pie, and that you've been using it for years reliably within the product too, it sounds like. Is that correct?

[00:10:09] Mark Creamer That's correct. Yes.

[00:10:11] Joe Colantonio Nice. So maybe some examples then of how you were using it, like for testing specifically before Gen AI?

[00:10:18] Mark Creamer Yeah. So you can relate it to what Gen AI does, but we've been using it for automated test case creation for years. Gen AI does that too. The other thing is the automatic script generation for getting to automated test execution. We've been using NLP, natural language processing, to get to that in an automated way for years. And if you look at the test case generation that we do with the Symbolic AI, it has some unique characteristics. One is it creates a deterministic set of test cases. And what does that mean? It's predictable. You present the problem, fundamentally the requirements, into your environment, and based on those requirements, it'll provide you 100% coverage of those requirements for functional testing and do that deterministically. What that means is if I have the same problem set, I get the same result. With Gen AI, you basically get a different answer one day versus the other. Hopefully, no hallucinations in there, but that's part of the process. In the Symbolic AI environment, you get something predictable. And our algorithms and approaches actually deliver an optimized set of test cases, not just test cases; we pretty much guarantee the least number of test cases to provide 100% coverage of the testing requirements at hand. You can't get that with Gen AI.
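
To make the "deterministic test case generation" idea concrete, here is a minimal sketch that derives test paths from a small, made-up state-transition model of a login flow. Given the same model it always produces the same paths and stops once every transition is covered. It is not Conformiq's algorithm; in particular, this simple greedy loop does not guarantee the smallest possible test set the way Mark describes.

```python
from collections import deque

# A toy state model; each edge (label, next_state) is a transition to cover.
MODEL = {
    "start":       [("enter credentials", "credentials")],
    "credentials": [("submit valid", "logged_in"), ("submit invalid", "error")],
    "error":       [("retry", "credentials")],
    "logged_in":   [],
}

def generate_tests(model, start="start"):
    """Return a deterministic list of step sequences covering every transition."""
    uncovered = {(src, label) for src, edges in model.items() for label, _ in edges}
    tests = []
    while uncovered:
        # Breadth-first search for the shortest path reaching an uncovered transition.
        queue = deque([(start, [])])
        seen = {start}
        found = None
        while queue:
            state, path = queue.popleft()
            for label, nxt in model[state]:
                new_path = path + [(state, label, nxt)]
                if (state, label) in uncovered:
                    found = new_path
                    queue.clear()
                    break
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, new_path))
        if found is None:
            break  # remaining transitions are unreachable from the start state
        uncovered -= {(s, l) for s, l, _ in found}
        tests.append([label for _, label, _ in found])
    return tests

print(generate_tests(MODEL))  # same output on every run for the same model
```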

[00:12:10] Joe Colantonio I would think you're kind of in a pickle, because a lot of these tools that have come out for test case generation are Gen AI, and they give you that experience. But you've been doing it for years using Symbolic AI, which sounds like it's built better for that use case. How do you explain that to someone? I mean, you just explained it great right now, but is there an example you give, like, here's using Symbolic AI compared to Gen AI, here are the differences?

[00:12:35] Mark Creamer Interesting question. I think you have to step back and say, okay, what do the two approaches do? Test case generation, okay. And what's your goal with test case generation? Because if your goal is just test case generation, Gen AI is a perfect solution. If your goal is to have an optimized set of test cases, or very focused test cases based on your particular objectives, Gen AI can't do that. That's where the Symbolic AI delivers significant value. The other thing that's important is predictability and repeatability, which you don't get with Gen AI. So if you have compliance requirements or financial testing requirements that present a significant amount of risk associated with the application you're developing, and you really need to do some very focused, intense QA and regression testing, you can't necessarily use Gen AI to target specific areas like that, where with Symbolic AI, you have the ability to focus and target and optimize the type of testing that you're doing.

[00:13:55] Joe Colantonio Love it. I like to give people some tips if they're listening. One thing is, if they're looking at AI solutions and they want to do test case generation, especially if they're in a regulated environment, they should ask, I guess, whether it's using Symbolic AI or Gen AI. And if it's using just Gen AI, they probably want to be cautious, I would think.

[00:14:11] Mark Creamer My suggestion is that they just be aware of what they get with Gen AI versus Symbolic AI. The goal is to have the repeatability and the definability of what you're testing and how efficiently you're testing it. You don't get that with Gen AI. If you have a low-risk, website-type application and you just want to do some exploratory testing, Gen AI is great. You don't need to deal with that. But look at where Gen AI delivers value, and it can deliver value complementary to Symbolic AI, which is something everybody who wants to use AI should be aware of: where you will get the most value. And frankly, there are some challenges with using Symbolic AI. You have to provide a well-defined, model-based environment to do that. Where, hey, I can use Gen AI to help me create those models and make things a lot faster and more efficient, doing copilot-style assistance to get people to the value that the Symbolic AI delivers. Versus, oh, let me use Gen AI to generate test cases. No, let me use Gen AI to help me be more efficient in creating and taking advantage of Symbolic AI as part of the testing process. Does that make sense?

[00:15:41] Joe Colantonio It does. Absolutely. What is Gen AI good at then? Like you said, does it just give you ideas, or does it create a model that you then use to feed the Symbolic AI? How does that work?

[00:15:53] Mark Creamer I think Gen AI is really good at making smart people smarter and more efficient. It's not good at making novices brilliant, because we've heard of Gen AI hallucinations. Well, that would probably contribute to that level of feedback you get. But Gen AI, in a sense, is kind of working with somebody that gives you suggestions. And not all the people you work with are brilliant. You have to assess and really be aware of what it is you're getting from interacting with a Gen AI framework like ChatGPT, where with Symbolic AI, in a structured approach, your output is predictable and well-defined and reusable. The quality of the output using Symbolic AI is, as I mentioned, well-defined, where it's not so much with Gen AI.

[00:17:00] Joe Colantonio And it sounds like, for a lot of people, like you said, if you just want test case generation for the sake of test case generation, Gen AI is just going to spit it out. But if you have areas of risk and you want, like you said, the optimal amount of test cases, Symbolic AI is the way to go.

[00:17:15] Mark Creamer That would be our recommendation.

[00:17:18] Joe Colantonio Perfect. All right. I think we have the gist of the differences. I'm just curious to know, how else can AI help us? I mean, test case generation is big, but are there any other areas of testing that you find AI is particularly good at helping people with?

[00:17:34] Mark Creamer Yeah. So the way we look at it at ConformIQ, we have a platform that we know works and that we've been delivering for years, and the question is how we can make the usability of our platform more efficient. We see a number of opportunities to use Gen AI to make existing test case generation and script generation, intelligent automated testing frameworks, much more efficient. And how do we do that? We've got some products that are focused on capturing requirements and using those requirements to build tests and so on. And then, how do you get from a BDD, behavior driven development, testing process to a simplified automation process, automated execution? Well, you can use Gen AI to fundamentally fill out the forms needed to go from, hey, I know what I want to build, I know how to test it, but I'm not an automation expert. I can use Gen AI to solve that test automation process: with the push of a button, integrate the application under test with the tests that result from the behavior driven development, and get automated execution on my own desktop. That's really cool. So that's where Gen AI complements testing and behavior driven development. But now you think about behavior driven development, it's really focused on scenarios and user stories, and it doesn't take into account the entire system, end-to-end, with all the options and all the possible interactions in these high-intensity, compliance and/or financial testing areas like banks and so on. It doesn't test the end-to-end scenarios. And how do I get to that and do it efficiently and effectively? Well, what if I could take those user stories and combine them together into a system-level model, and then take that system-level model and apply Symbolic AI on top of it to generate and optimize test cases that allow you to test your entire system? That's where Gen AI can contribute significantly to getting from the user story or scenario level to the system-level model, and that's where we at ConformIQ are looking at providing those kinds of solutions as well. So you take the user stories that you are testing at the unit level and combine those to create a system architecture, and use the intelligence of figuring out how all those user stories go together to define the system-level model. That's where Gen AI can help a bunch. And then you have the environment that gives you the test case generation and the clarity that you need to do very efficient end-to-end testing.
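
One way to picture the "push of a button" glue Mark describes is to take each Gherkin step and ask an LLM to draft an automation stub for it. The sketch below is a hypothetical outline, not ConformIQ's product: the feature text is made up, `draft_step_definition` simply builds a prompt, and `call_llm` is a placeholder for whatever LLM client a team actually uses.

```python
# Hypothetical feature file a sprint team might already have.
GHERKIN = """
Feature: Loan application
  Scenario: Qualified customer applies for a loan
    Given the customer has a verified account
    When the customer submits a loan application
    Then the application status is "pending review"
"""

def extract_steps(feature_text):
    """Pull the Given/When/Then lines out of a feature file."""
    keywords = ("Given ", "When ", "Then ", "And ", "But ")
    return [line.strip() for line in feature_text.splitlines()
            if line.strip().startswith(keywords)]

def draft_step_definition(step, call_llm):
    """Ask an LLM (placeholder client) to draft an automation stub for one step."""
    prompt = (
        "Write a Python step definition stub for this Gherkin step, "
        f"with a TODO body a tester can fill in:\n{step}"
    )
    return call_llm(prompt)

# Usage sketch:
# steps = extract_steps(GHERKIN)
# stubs = [draft_step_definition(s, my_llm_client) for s in steps]
```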

[00:21:15] Joe Colantonio All right. I just want to explore a little bit more to get some clarity around it. Someone has a bunch of BDD feature files, and they may be caught up in, say, the particular feature they're releasing for that sprint, and kind of forget about the overall picture: how does it fit with what's there already, what's going down the road. So based on that, Symbolic AI is smart enough to take it and come up with a system-level model, and then it can maybe show areas that they missed, or areas that are over covered or under covered?

[00:21:46] Mark Creamer Yeah. Call it, basically, the Symbolic AI framework, because it's the application of Symbolic AI that allows you to do this. But think about, hey, I'm working on a sprint, and I need to test the functionality for this new loan that I'm going to process. Well, that new loan fits into a loan portfolio. There are credit checks and there are all these other components of the end-to-end application that need to be considered before you actually release this new product. I do my user story, and I qualify this person for the loan, and we get them signed in and whatever. But that needs to be tested in the context of the overall system, which is a bunch of other user stories. That user story needs to be integrated into the big picture, and the new functionality needs to be tested not only in its own environment, but in the context of the overall system. And that's what we're talking about here: leveraging the work that's been done at the sprint level to drive the automated creation of test cases in an optimal way for the entire system. We can use those user stories and Gherkin descriptions and so on to create that system model.
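
As a loose illustration of stitching sprint-level user stories into a system-level view like the loan example above, the sketch below links a few hypothetical scenarios into one flow graph by matching each scenario's outcome to another scenario's precondition. The scenario names and fields are invented for the example; this is an assumption about how such a model could be assembled, not ConformIQ's implementation.

```python
# Hypothetical scenario summaries from separate sprint teams.
SCENARIOS = {
    "apply for loan": {"given": "customer is registered",
                       "then": "loan application is submitted"},
    "credit check":   {"given": "loan application is submitted",
                       "then": "credit decision is recorded"},
    "issue loan":     {"given": "credit decision is recorded",
                       "then": "loan is active in portfolio"},
}

def build_system_model(scenarios):
    """Chain scenarios into a system-level graph: an edge A -> B means
    scenario A's outcome satisfies scenario B's precondition."""
    edges = []
    for name_a, a in scenarios.items():
        for name_b, b in scenarios.items():
            if name_a != name_b and a["then"] == b["given"]:
                edges.append((name_a, name_b))
    return edges

for src, dst in build_system_model(SCENARIOS):
    print(f"{src} -> {dst}")
# apply for loan -> credit check
# credit check -> issue loan
```

The resulting graph is exactly the kind of structure the earlier transition-coverage sketch could walk to generate end-to-end test paths spanning several teams' stories.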

[00:23:27] Joe Colantonio Then does the system model create its own tests automatically, without having to write a new feature file?

[00:23:32] Mark Creamer Yeah, there you go. So you're testing at the unit level, and then the system model creates an optimized set of test cases for testing the entire system, not just the component or the user story or scenario that you're integrating into that system environment.

[00:23:51] Joe Colantonio A lot of times when we talk about testing, a lot of people talk about shift left, shift right. It sounds like the system level model can help address both areas almost?

[00:24:01] Mark Creamer I'd kind of take advantage of that description of shifting left: it's more about starting right, and starting correctly, where you're properly capturing what it is you're building, with the context. And that's where Gen AI can help a lot. You can use the natural language processing capabilities of Gen AI to automatically generate the BDD scripts and the BDD tests and the BDD descriptions in Gherkin. You can use Gen AI to do that. And by starting properly with the requirements, with the testers and the developers working together, you can fundamentally drive the testing process much more efficiently, because you have a well-defined description to help drive the end-to-end testing as well as the agile testing that's required as you build the product along the way. The Symbolic, model-based approach really addresses the efficiency of the regression testing that needs to happen at the system level, and the BDD approach provides the agility and the testing that needs to happen at the unit level.

[00:25:33] Joe Colantonio I don't know why this idea just came to my head. I could be completely off base. Let's see. If I have a system-level model that understands the system, and someone checks in code that changes a feature that touches components B and Z, a lot of times people just run all the tests, we don't care. Does this help narrow it down? Like, hey, this was checked in, you only need to run these two test cases. You have a hotfix, run these two test cases, you're covered. Am I off base with that?

[00:26:01] Mark Creamer You're in the ballpark.

[00:26:03] Joe Colantonio All right.

[00:26:09] Mark Creamer You do it at the system level, and you have the flexibility to say, oh, okay, I just updated a certain set of code and put that into the system. I don't need to test the entire system, but I maybe need to see how the new code works with a couple of other components. So I can take a piece, specify a piece of that system, generate the test cases just for that new code in the environment it specifically interacts with, and target the testing just for that particular area. You can test that new code at the unit level, but hey, I really need to make sure it works in the context of the environment it interacts with at the system level. But I don't want to run a thousand test cases. I just want to run 30, which test the area that it particularly impacts. You're in the ballpark.
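
To illustrate the "run 30 tests instead of a thousand" idea, here is a small sketch of change-impact selection over a component dependency map. The component names and mappings are made up for the example; a model-based tool would derive this information from the system model rather than from hand-written dictionaries.

```python
# Hypothetical map of which system components each test case touches.
TEST_COVERAGE = {
    "test_new_loan_happy_path":    {"loan_api", "credit_check"},
    "test_credit_check_timeout":   {"credit_check"},
    "test_portfolio_report":       {"portfolio", "reporting"},
    "test_loan_to_portfolio_sync": {"loan_api", "portfolio"},
}

# Direct dependencies: component -> components it relies on.
DEPENDS_ON = {
    "credit_check": {"loan_api"},
    "portfolio":    {"loan_api"},
    "reporting":    {"portfolio"},
}

def impacted_components(changed, depends_on):
    """Expand the changed set with everything that (transitively) depends on it."""
    impacted = set(changed)
    grew = True
    while grew:
        grew = False
        for component, deps in depends_on.items():
            if component not in impacted and deps & impacted:
                impacted.add(component)
                grew = True
    return impacted

def select_tests(changed, coverage, depends_on):
    """Pick only the tests that touch an impacted component."""
    impacted = impacted_components(changed, depends_on)
    return sorted(test for test, comps in coverage.items() if comps & impacted)

print(select_tests({"credit_check"}, TEST_COVERAGE, DEPENDS_ON))
# ['test_credit_check_timeout', 'test_new_loan_happy_path']
```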

[00:27:17] Joe Colantonio What I love about this, though, is it's kind of a hidden feature. I worked for a large enterprise; we had eight sprint teams, eight people in a team. It was chaos. It almost seems like the system-level model, once again, I could be off base, almost helps with collaboration or communication, where you actually have something you can point to, to get people talking, thinking, innovating. Is that another, maybe not offhand, but another undocumented feature of this approach?

[00:27:43] Mark Creamer Interestingly, that collaboration is a key component I see at both ends, the unit and the system level testing, because AI doesn't really drive that collaboration. So you have to do the collaboration, one. Two, one of the unique components that we deliver as part of our BDD solution is not only the textual description that you get through Gherkin; it also creates a model, so you have a picture, a visual view, of what it is you're testing and what it is you're building. And you get the same value at the system level: you have the visual model of how everything connects and flows. And you talk about collaboration. Well, one of the real values of having a picture, being able to look at how things are connected and how things flow, is that it really facilitates that collaboration. And we've heard that from our customers at both levels, at the BDD level and the system level. Having that picture and the model delivers real value. Yeah, you hit the nail on the head on that one.

[00:29:04] Joe Colantonio All right. So another one then, again a random thought. You go into, like, sprint planning, you have eight teams. Using a visual model, you could probably say, okay, my story actually has a dependency on this team. Is that something it can help with, collaboration or planning?

[00:29:20] Mark Creamer Oh yeah. Absolutely. And having the model view of how things work together helps significantly, not only at the front end, but at the back end, where you can verify and validate that, hey, I built this and I did these things, and I made sure that we got the information we expected from this group or this function that was being built. So yeah, absolutely. The visual model provides significant value in the collaboration.

[00:29:50] Joe Colantonio I think the past few questions you kind of addressed how AI isn't really replacing people, it's replacing tasks, kind of the grunt work, and making it where collaboration is becoming more at the forefront, which it always should be, rather than the grunt work. That's the message I'm getting from this.

[00:30:07] Mark Creamer Yeah, absolutely. I don't know that it replaces all the grunt work, but I think it more makes smart people smarter. But you've got to have collaboration regardless of whether or not you have AI. And, God forbid AI provides the innovation function at some point; hopefully that will still come from real intelligence versus artificial intelligence. But collaboration has to happen regardless, and the frameworks and tools that we provide really enable that collaboration.

[00:30:45] Joe Colantonio Mark, I'm just jumping into the mind of a listener here. Probably some of this sounds great, but it won't work for my application. If someone wants to check out ConformIQ, is there a way they can actually see how it works and how it could apply to their particular tech stack, maybe?

[00:30:59] Mark Creamer Yeah. It's interesting, depending on where you're coming from. If you're looking at the system-level model, the way we engage there is more about defining a proof of concept. At the BDD level, it's just go to the marketplace, download us, and start using it. So it depends on the approach and the problem you're trying to solve. There are a lot of ways you can get access. And if you're interested, we have a lot of videos, and nothing is better than a real-time demo to take a look, see how all the approaches work together, and have a discussion about where to get started, depending on where you're coming from.

[00:31:46] Joe Colantonio Okay Mark, before we go, is there one piece of actionable advice you can give to someone to help them with their AI automation testing efforts? And what's the best way to find or contact you?

[00:31:55] Mark Creamer Well, hey, check the link below. Let's have a discussion. Let's engage. Reach out at ConformIQ.com, and we'll set up a demo and a discussion to talk about how this approach best applies to your environment.

[00:32:17] Thanks again for your automation awesomeness. Links to everything we covered in this episode are at testguild.com/a506. And if the show has helped you in any way, shape, or form, why not rate it and review it on iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:32:53] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Mateo Rojas Carulla TestGuild DevOps Toolchain

AI and the New Era of Cybersecurity Threats with Mateo Rojas-Carulla

Posted on 12/11/2024

About this DevOps Toolchain Episode: Today, we're exploring a topic that's becoming more ...

Discover-Future-Trends-in-Automation-at-Automation-Guild-Feature-Image

Discover Future Trends in Automation at Automation Guild

Posted on 12/08/2024

About This Episode: I'm your host, Joe Colantonio, and I am thrilled to ...

Evan Niedojadlo TestGuild DevOp

From Code to Leadership with Evan Niedojadlo

Posted on 12/04/2024

About this DevOps Toolchain Episode: Today's episode delves into the journey of transitioning ...