Leveraging AI for Robust Requirements Analysis and Test Generation with Scott Aziz and John Smart

By Test Guild
Leveraging AI for Robust Requirements Analysis and Test Generation webinar with presenters Scott Aziz and John Smart.

About This Episode:

Today, we're diving into the future of user story optimization and the art of turning those stories into actionable tests.

Today, we have the privilege of being joined by Scott Aziz, the visionary founder of AgileAI Labs and a renowned BDD expert. We are also accompanied by John Smart, the creator of the Serenity framework, whose expertise in BDD is unparalleled. Together, we will delve into the powerful capabilities of the new AI tool Spec2test.

Imagine a tool so refined that it not only assesses your user stories for ambiguities using an intricate 7-point framework but also offers AI-generated suggestions to enhance them.

That's not all: Scott and John also discuss how Spec2test fosters essential iterative collaboration, paving the way for crisp, clear requirements and generating corresponding test cases right from the get-go.

This is not just about meeting the standards of behavior-driven development; it's about exceeding them. With the tool's combination of user story analysis and sophisticated testing capabilities, spanning functional test cases to security testing advice, you're getting a comprehensive suite that breathes life into automation.

Our guests reveal how Spec2test serves as a co-pilot in requirements discovery and reshapes agile teams' productivity. For enthusiasts who want a taste of its power, a visual demonstration or free trial is the best gateway to appreciating its full potential.

So, gear up for an illuminating session on bringing precision and collaboration to the forefront of your testing strategies — listen up!

About Scott Aziz

Scott Aziz

Scott Aziz has been testing software for 36 years and has worked as a Practice Head in some very large QA organizations. Scott founded AgileAI Labs on a mission to revolutionize how AI can be leveraged to prevent defects across the SDLC and significantly increase the productivity of all agile team members. The result of that mission is Spec2TestAI, a defect prevention platform that elevates the agile SDLC process to develop better-quality software.

Connect with Scott Aziz

About John Smart

John Smart

John Ferguson Smart is a specialist in BDD, automated testing, and software development lifecycle optimization. He is the founder of the Serenity Dojo, an online training platform for testers who want to become world-class Agile Test Automation Engineers, the author of the best-selling book “BDD in Action”, and the creator of the Serenity BDD test automation framework.

Connect with John Smart

Rate and Review TestGuild

Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

Visit the AgileAI Labs website, https://agileailabs.com/, and use the promo code JOEC to receive 10% off when you subscribe to the platform.

[00:00:04] Get ready to discover the most actionable end-to-end automation advice from some of the smartest testers on the planet. Hey, I'm Joe Colantonio, host of the Test Guild Automation Podcast, and my goal is to help you succeed with creating automation awesomeness.

[00:00:20] Hey, it's Joe, and welcome to another episode of the Test Guild Automation Podcast. Today, we have a special treat for you. It's all about leveraging AI for robust requirements analysis and test generation. This is an approach I haven't seen before. If you don't know, I have a news show, and I stumbled across this awesome tool that leverages AI for requirements that I think you definitely need to hear about. So I was lucky enough to get Scott, the founder of the company behind it, AgileAI Labs, whose solution can help you prevent defects across your software delivery lifecycle as well as significantly increase the productivity of all your agile team members. It really does a lot of cool stuff. Scott has been testing software for over 36 years, so he really knows his stuff. And we also have John Smart joining us. You probably know him from previous Test Guild podcasts and our conferences. He is a BDD and automation testing specialist, and he's also the founder of the Serenity Dojo, an online training platform for testers who really want to learn world-class agile test automation engineering. He's also the author of what I think is the best book on BDD, BDD in Action, and the creator of, I always say this, my favorite framework in the world, the Serenity BDD test automation framework. That used to be my go-to framework when I was working full-time as an automation engineer. Really excited to have both of you on the show. You don't want to miss it. Check it out.

[00:01:50] Joe Colantonio Hey guys, welcome to The Guild.

[00:01:54] Scott Aziz Hey, Joe. Good to be here.

[00:01:57] John Smart Thank you.

[00:01:57] Joe Colantonio I'm really excited to have you. I guess, Scott, let's start with you. As I was mentioning, I'm always looking for new tools and things to report on my news show, and I came across something called Spec2TestAI. Any time I have a founder on the show, I'm curious to know why you created this particular tool and what makes it different from other tools that are out there.

[00:02:18] Scott Aziz Yeah, honestly, the reason the team and I created the tool is that we've got a lot of passion for defect prevention. Pretty much everybody on the team we've assembled has 25-plus years of quality engineering and testing experience. Dave, along with myself, thought hard about how we could make a real difference using AI. During the genesis of the project, we looked across our experiences. Many of us have done consulting, for example, and we asked, where did we make the biggest bang for the buck? It was almost always in and around helping companies and organizations prevent defects. Even though we're all passionate testers, preventing defects always had that massive ROI. So we really felt like this is an area where not a lot of people are focusing, and certainly not a lot of tools or platforms are focusing on really shifting left and using AI to do that effectively. We said, you know what? This could work. And here we are, months and months later, and it's working.

[00:03:44] Joe Colantonio Awesome. And John, how did you get involved? I know you're a BDD expert. Did this come across your radar and you thought, oh, this is a good idea? How did you get involved in this as well?

[00:03:54] John Smart Well, I often get asked to take a look at AI and test automation tools, and I do. And often I don't say anything, because I'm a little bit underwhelmed by them. When Scott reached out and explained what he was doing and showed me what he was working on, it was an early version, but even at that stage the evolution of the tooling really blew me away. And just talking to Scott, he felt like my long-lost brother from America, because we were thinking in exactly the same way. This whole idea that you don't actually get the leverage from automating tests after the fact. The real bang for your buck is when you're helping teams collaborate, discover, and align their thoughts early on. And teams see massive problems all the time, even now, aligning their thoughts and getting their ducks in a row, so to speak, making sure their requirements are consistent and getting that communication flow going. A lot of that comes down to expressing the requirements well, troubleshooting them initially, and asking the right questions, and applying AI at that early level just makes so much sense. So the product just really clicked.

[00:05:13] Joe Colantonio All right. So we keep seeing this shift almost all the way left, to the requirements. Can you give us a small example of what we're talking about? Usually when people hear AI, they think of Gen AI just creating a script for you. But this, I think, is a lot more. It's a different approach. Can we talk about the requirements-focused approach that this takes?

[00:05:32] Scott Aziz Yeah, I'll take that. One of the things the platform does is really drill down and analyze user stories or requirements, and not just the story itself but the acceptance criteria as well. For example, we produce 32 different measures from a single user story, so we're measuring 32 different attributes of a user story. We use very popular frameworks to do that, like INVEST and SMART, and we have a couple of other custom frameworks that we've built. When people first jump into the tool or get their first demo, that's usually the first aha moment. It's, oh, wow! This thing is really different, because the platform is literally interrogating my user story, telling me where it's good, where it's bad, and giving me really helpful AI-based advice on how I can correct some of these problems, for example, ambiguities. That's one of the biggest things the platform points out. We have a seven-point ambiguity framework, so seven of those 32 measures are very specific to ambiguities. And the platform not only tells you, hey, you've got some ambiguity problems here, it fixes them for you automatically. It tells you exactly what it has fixed, exactly what it has gone through and changed, so the user is always in control. But it does that for you automatically on your behalf.
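To make the ambiguity idea concrete, here is a minimal, illustrative sketch of the kind of check an ambiguity analysis might automate: flagging vague qualifiers in a user story so a human can tighten the wording. The word list, class name, and the story text are assumptions for illustration; this is not Spec2TestAI's seven-point framework.

```java
import java.util.List;
import java.util.Locale;

// Illustrative only: a toy ambiguity check, not Spec2TestAI's actual framework.
public class AmbiguityCheck {

    // Hypothetical list of vague qualifiers that often hide untestable requirements.
    private static final List<String> VAGUE_TERMS =
            List.of("fast", "easy", "user-friendly", "as appropriate", "etc", "robust", "quickly");

    // Returns the vague terms found in the story so a reviewer can replace them
    // with measurable wording (e.g. "fast" -> "responds within 2 seconds").
    public static List<String> flagVagueTerms(String userStory) {
        String lower = userStory.toLowerCase(Locale.ROOT);
        return VAGUE_TERMS.stream()
                .filter(lower::contains)
                .toList();
    }

    public static void main(String[] args) {
        String story = "As a shopper, I want checkout to be fast and easy so that I buy more.";
        System.out.println("Possible ambiguities: " + flagVagueTerms(story));
        // Prints: Possible ambiguities: [fast, easy]
    }
}
```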

[00:07:09] Joe Colantonio Awesome. So John, BDD expert, you wrote the book on this, BDD in Action. Can you explain how this type of solution can help make someone's automated tests better?

[00:07:19] John Smart What I find in this solution, this approach, is that, and we like to call it the Fourth Amigo, it really is an iterative, collaborative approach. It forces you to go through several steps, to reflect on the requirements and clarify them if they need it. It points out, hey, here are some issues you probably should address. Here are some things that aren't clear. Are you sure this is what you meant? And you work your way down. You don't just say, okay, here's a requirement, give me a test script. You work your way down from the requirements to user stories to acceptance criteria. From the acceptance criteria, you can identify test cases, and you can go to a given-when-then if you want. And that does help, to be clear. But the real value is that whole flow, the step-by-step approach, being able to collaborate and iterate on it and get feedback at each stage. Because what I find with ChatGPT, for example, and I've done a lot of experimenting with it and it's getting a little bit better, is that you give it a user story and ask for some acceptance criteria, and it goes off in a direction that it chooses. That may or may not be the right direction, but once it starts, it's very hard to stop. Whereas this approach really points out the things you need to improve at each level, with all these different criteria you don't normally get. So from a BDD perspective, it really is a great way to help collaboration around those requirements. And this is what I find when I'm coaching teams: the single biggest way to get teams doing better automation is to get them doing better requirements. After that, the tooling is natural. We work in an industry with smart people. Once you figure out the requirements and show them how to write good acceptance criteria, the rest flows really naturally. You have a few conversations and they're off and running. But if you don't show them how to get the requirements right and don't help them in that early stage, it's much harder, and they can get stuck in a rut and get themselves into a lot of trouble. So from an automation perspective, you're really leveraging that power of collaboration early on, and that's where you get the benefits.
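To illustrate the given-when-then flow John describes, here is a minimal sketch of how a scenario derived from an acceptance criterion typically maps onto Java step definitions. The scenario wording, class names, and the in-memory stand-in application are hypothetical; this is not output from Spec2Test.

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical example: the acceptance criterion "a registered user can log in with
// valid credentials and see the dashboard" expressed as Given/When/Then steps.
public class LoginStepDefinitions {

    // Trivial in-memory stand-in for the application under test.
    static class FakeApp {
        private String registeredUser;
        private String registeredPassword;
        private boolean loggedIn;

        void register(String user, String password) {
            this.registeredUser = user;
            this.registeredPassword = password;
        }

        void logIn(String user, String password) {
            loggedIn = user.equals(registeredUser) && password.equals(registeredPassword);
        }

        boolean dashboardVisible() {
            return loggedIn;
        }
    }

    private final FakeApp app = new FakeApp();

    @Given("a registered user with valid credentials")
    public void aRegisteredUserWithValidCredentials() {
        app.register("alice", "s3cret");
    }

    @When("the user logs in")
    public void theUserLogsIn() {
        app.logIn("alice", "s3cret");
    }

    @Then("the dashboard is displayed")
    public void theDashboardIsDisplayed() {
        assertTrue(app.dashboardVisible());
    }
}
```

The point of the layering is that the Gherkin text stays at the business level while the step definitions decide how each step is actually exercised.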

[00:09:31] Joe Colantonio It sounds like it really embraces the true spirit of what behavior-driven development was created for, and that is the collaboration and the communication. A lot of times tools come in as automation for us, but this sounds like it's literally BDD the way BDD was meant to be used. I don't know, Scott, if you agree with that statement or not.

[00:09:49] Scott Aziz Yeah, absolutely. It's interesting, because when John and I first started collaborating, and I'll admit I'm not a BDD expert, I was even able to learn some real tenets of BDD that I've been able to feed back into the AI training. In fact, John and I recently collaborated on some enhancements to the BDD output in the tool. As you know, he's got great insight and a great handle on all of that. It's interesting what you don't know and what can be done, because it really is such a powerful mechanism. And when we can build at least some of those best practices and some of that training into an AI tool and help automate some of that process, there really is a ton of value. I absolutely love what John is talking about.

[00:10:54] Joe Colantonio This is something I think people actually need to see to really appreciate. I know you do have a free trial, and I'll try to walk through it verbally, but with audio only, just talking heads, it might be hard for people to envision. So how does it work? Someone has a user story, they feed it into your solution, and it automatically creates test cases? I think it handles edge cases, it can come up with mathematical models and test coverage, and I think it even has traceability as well. So how does that all work? Can you lay it out so people hearing this get the gist of what it's doing?

[00:11:31] Scott Aziz Yeah. And by the way, people can jump over to our YouTube channel, AgileAI Labs, and take a look as well, because we've got a number of videos out there, some of them 30-35 minutes long, where we literally walk through it. But what it does is this: there are really two major components to the tool. A lot of what you just spoke about is on the testing side, but what happens first is the user story gets distilled and measured. That's what we call step two, all of the analysis around the requirement. We not only provide all of that feedback and advice, as I mentioned, against those 32 measures, we actually change the requirement, we enhance it, what we call the enhanced requirement. That's done automatically on the platform. Then we generate other assets, like security requirements; we have models that help generate those. We step through other features that do things like cross-analysis from a user story perspective. For example, we don't just look at your user story, we look at the other adjacent user stories defined in your project. That's really helpful when you break out of the mold of the existing user story you're trying to analyze and do that adjacent scan. So we're doing things like that on the analysis side. Then you basically look over all that material, click a button, and graduate to the testing side. And the testing side has, as you mentioned, functional test cases, happy path, edge, negative, and security test cases. We're doing a lot with security right now; users get a lot of pointed OWASP security testing advice as well as advice around OWASP ZAP. In addition to that, they get, as you mentioned, the mathematical models, and they get a view, from a test case perspective, of what their coverage is. And when we talk about coverage, we're talking about coverage and traceability back to the requirement. Obviously, the tool knows nothing about their code, but we're tracing everything back to the requirements. So those are some of the things we're producing. There's actually a lot in the platform.

[00:13:54] Joe Colantonio John, is there anything you found it producing that you thought was really neat? Like you said, you were underwhelmed by all the other solutions, but with this one you were like, wow, this particular feature is the killer feature.

[00:14:05] John Smart It's the layering that I really like. The fact that you start with a high-level requirement and break it down. It helps you structure your thinking around the requirements, and at each stage it gives you feedback about the quality of the requirements you write. So you come up with a high-level feature, you can break that down into stories, and it gives you feedback on those stories. Most tools just start with, okay, I'm going to take a piece of text or a user story or whatever and spit out some acceptance criteria or some test scripts, or even just let you enter the test scripts, but they don't help you actually think about whether you're writing the right tests and whether you're coming up with the right requirements.

[00:14:47] Joe Colantonio Awesome. All right, I guess the reason I'm hopping on this is traceability. I used to work for a health care company and also an insurance company, and traceability was really critical for audits, like when the FDA comes in. It was always hard to say, okay, we run these tests and they trace to this requirement. And any time a requirement changed, we had to make sure we had the right tagging for the tests. Can this particular solution help with traceability, especially for an audit? Is that a use case you would see this being good for?

[00:15:18] Scott Aziz That's one of the assets we focused on, and it gets generated automatically. Basically, what we're doing there is creating a table, if you will, of all of the acceptance criteria, in other words, all of the points that must be interrogated from a testing perspective. Then we have a different AI model that takes all of that data, compares the generated tests with it, and draws conclusions. It summarizes whether or not there is 100% traceability. If there isn't, and sometimes there won't be, we will tell you, hey, you've got 96% traceability, here's your gap, and you'll want to fill that gap in. It provides paragraphs and paragraphs of detailed information around all of those points of traceability, which is really quite helpful. Again, all of that gets generated within the course of two and a half to three minutes, in addition to all of the other assets I mentioned earlier. It's such a low-touch feature with so much potential value that it's probably one of the bigger feedback points we've been hearing, at least on the testing side: you know what, this is really quite useful. This stuff would take me 25, 30, 40 minutes to write, and I get it automatically with the output of all of my tests. You guys are saving me a ton of time.
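As a rough illustration of the coverage-and-traceability idea (not the platform's actual model), the sketch below maps hypothetical acceptance criteria to the test cases that reference them and reports the percentage covered plus any gaps. All IDs are made up for the example.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative only: a toy traceability report from acceptance criteria to test cases.
public class TraceabilityReport {

    public static void main(String[] args) {
        // Hypothetical acceptance criteria IDs for a single story.
        List<String> acceptanceCriteria = List.of("AC-1", "AC-2", "AC-3", "AC-4");

        // Hypothetical generated test cases, each tagged with the criteria it exercises.
        Map<String, Set<String>> testsToCriteria = Map.of(
                "TC-login-happy-path", Set.of("AC-1", "AC-2"),
                "TC-login-bad-password", Set.of("AC-3"),
                "TC-login-locked-account", Set.of("AC-3"));

        Set<String> covered = testsToCriteria.values().stream()
                .flatMap(Set::stream)
                .collect(Collectors.toSet());

        List<String> gaps = acceptanceCriteria.stream()
                .filter(ac -> !covered.contains(ac))
                .toList();

        double coverage = 100.0 * (acceptanceCriteria.size() - gaps.size()) / acceptanceCriteria.size();
        System.out.printf("Traceability: %.0f%% of acceptance criteria covered%n", coverage);
        System.out.println("Gaps to fill: " + gaps); // e.g. [AC-4]
    }
}
```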

[00:16:44] Joe Colantonio What happens once you get that output, though? Is it automatically going to create automated tests for you, or is that another tool that would come in and read it to do that? Or is this not necessarily just for automation? Someone might want to do exploratory testing, say with security; they may not know how to use these tools for security. Does it go through these user stories in enough detail that they could perform the tests manually?

[00:17:06] Scott Aziz Yeah. So what the tool does is provide very detailed manual test cases. That is sort of our biggest selling point, if you will, of the platform. Our test case generation engine is very mature, and the reason I say that is because we use different mathematical models to actually generate tests. What you're getting is the base, the foundation, of all those manual test cases, automatically generated. The platform also automatically generates a starter test automation script. You can choose your tool, Selenium, Appium, Robot, whatever you want, and choose the language you want the script output in, and it will do it. Basically, every single step in those functional tests is coded in what we call that base template Selenium output. Most of the time you're going to get a few hundred lines, 200, 300, 400 lines of code, that mimic step for step exactly what you see in those manual test cases. Again, that becomes a huge time saver. However, as you know, that is not all you need as a test automation engineer, because you need, for example, exposure to the DOM; you need to be able to automate all those low-level components, and you cannot do that unless you have exposure to all of those web elements. Our platform doesn't have that exposure. It's really just a starter framework, and we need to emphasize that what it produces for you is a starter framework, again, in Selenium, in Appium, in Robot. But it helps; you're going to save multiple hours as a test automation engineer just by having that produced for you automatically. Of course, it's not going to get you the whole way there. We do have partnerships with other tools right now that allow us to take our manual test cases, which are very rigorous and thorough, feed them into their tool, and in one click generate no-code, low-code automated test scripts. But our tool itself doesn't do that. It only brings you so far.
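For readers wondering what a starter Selenium script looks like in practice, here is a minimal, hypothetical Java example: each step mirrors a manual test case step, while the URL and locators are placeholders that a test automation engineer would still replace with real web elements from the application's DOM. It is illustrative only, not Spec2TestAI output.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Illustrative starter script only: the URL and locators are placeholders because a
// generator working from requirements has no visibility into the application's DOM.
public class LoginStarterTest {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Step 1: Navigate to the login page (placeholder URL).
            driver.get("https://example.com/login");

            // Step 2: Enter a valid username and password (placeholder locators).
            driver.findElement(By.id("username")).sendKeys("alice");
            driver.findElement(By.id("password")).sendKeys("s3cret");

            // Step 3: Submit the login form.
            driver.findElement(By.id("login-button")).click();

            // Step 4: Verify the dashboard heading is displayed.
            boolean dashboardShown = driver.findElement(By.cssSelector("h1.dashboard")).isDisplayed();
            System.out.println("Dashboard displayed: " + dashboardShown);
        } finally {
            driver.quit();
        }
    }
}
```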

[00:19:28] Joe Colantonio So John, you're the BDD and Serenity expert. How would you use this output? Are you able to convert it? If you're in your dojo teaching your folks about BDD and you say, hey, here's how to get better requirements, how do you then convert that into an automated test?

[00:19:46] John Smart I think the beauty of it is that it actually produces half-decent Gherkin scenarios, which is what a lot of the tools struggle with. And I say half-decent tongue in cheek; they're actually really good. Really nice Gherkin scenarios from those acceptance criteria. And then the template, that's just template Java or JavaScript or Python or whatever language you use, and you can then fill in the gaps. Scott was talking about Selenium, but it's very minimalistic code; you can fit anything you want into it. It doesn't actually commit you to an implementation. Scott was talking about some of the BDD best practices, and the Gherkin it produces does stay at a business level, does keep things implementation-agnostic a lot of the time, which means you could automate through the UI or you could decide to go through an API, depending on the requirements you're working with. Having that Gherkin scenario basically saves you one layer. You can then take it; you'll need to tweak the code that you get, but you get a few hundred lines of starting code which compiles and works, and you can use that as a starting point. From there it's relatively easy to take it away and just start writing, the way you normally would.

[00:21:11] Joe Colantonio Is there ever a disconnect? Like, I have the requirements, I then generate my tests in a different framework, the tests become stale, people modify them, and they don't necessarily trace back to the original requirements anymore. Is there any type of traceability once it's out of the hands of, say, Spec2TestAI? Scott, I don't know if that makes any sense, but.

[00:21:30] Scott Aziz Yeah, it makes complete sense. No, there's no traceability there, because we strongly recommend that people stay true to the requirements portion of the platform. What that means is, if you're going to go off and change your tests, we understand that. But if at all possible, our recommendation is this: if you're going to enhance your tests because you're missing details, maybe there's an edge case you're missing, it's almost never because the AI has missed it. We've proven this through testing; it's almost always because you did not provide enough detail in your story to inform the AI that, hey, that edge condition is actually something I want tested. So what we recommend is that people try to keep that linkage, because as you know, requirements change, stories change, business needs change so often during the process. If you've got that linkage, all you need to do when you have a new business requirement is come into the platform and click the button. The way our tool works, you're either copying and pasting the user story in, or, with our Jira plug-in, you're pulling your updated story directly from Jira. The business gives you a new story, you click the button, and you get all the new analysis. Then you click the second button, and you get all of your testing assets based on the change. All of that happens in less than five minutes, whereas it would have taken hours and hours to write that stuff by hand.
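For context on the Jira side, pulling a story generally comes down to a single REST call such as GET /rest/api/2/issue/{issueKey}. The sketch below shows that call with Java's built-in HTTP client; the base URL, issue key, and credentials are placeholders, and this is not the Spec2TestAI plug-in's code.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Illustrative only: fetching an updated user story from Jira's REST API so it can be
// re-analyzed after a change. All values below are placeholders.
public class FetchJiraStory {

    public static void main(String[] args) throws Exception {
        String baseUrl = "https://your-domain.atlassian.net";
        String issueKey = "PROJ-123";
        String auth = Base64.getEncoder()
                .encodeToString("you@example.com:api-token".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/rest/api/2/issue/" + issueKey))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body includes fields.summary and fields.description for the story.
        System.out.println(response.body());
    }
}
```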

[00:23:01] Joe Colantonio Nice. So if someone had existing user stories and requirements, they would run them through this, and, I guess we alluded to it, it would point out gaps. Say I had a feature with maybe only one test covering it, and another feature, like a login, with 20 tests covering it. Would it be able to say, hey, look, you should generate tests for this area because you don't have any coverage on it?

[00:23:24] Scott Aziz The way it works is it will inform you and actually create the tests. If, for example, in your acceptance criteria or in the context of your story you've outlined the fact that you've got feature A, feature B, etc., and there are permutations around those features, the AI will pick that up and say, okay, you've got these permutations, I need to create tests for them. The engine itself will do that automatically. But one thing it won't do is look at your existing test cases. You can't actually feed your existing test cases in; it's not meant to do that. Everything we do comes from an enhanced requirement. The real starting point of our platform is that enhanced requirement we were talking about earlier.
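To illustrate the permutation idea in the simplest possible terms (this is not the platform's engine), the sketch below expands two option lists mentioned in hypothetical acceptance criteria into every combination a test generator would want to cover.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: expanding options called out in acceptance criteria
// (e.g. payment method and user type) into every combination, so each
// permutation gets at least one test case.
public class PermutationSketch {

    public static void main(String[] args) {
        List<String> paymentMethods = List.of("credit card", "PayPal", "gift card");
        List<String> userTypes = List.of("guest", "registered");

        List<String> testCases = new ArrayList<>();
        for (String payment : paymentMethods) {
            for (String user : userTypes) {
                testCases.add("Checkout as " + user + " paying by " + payment);
            }
        }

        testCases.forEach(System.out::println); // 3 x 2 = 6 combinations
    }
}
```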

[00:24:19] Joe Colantonio Nice. John, I know you develop a lot. When you saw this, was there anything where you thought, this is at stage one, but you can see a stage two if they just did this tweak? Is there anything you could see added to this that could make it even more powerful?

[00:24:35] John Smart It evolves so quickly. New features are coming out all the time, and I have been suggesting things. The interactive nature of it is something I'd like to see evolve even more. The fact that you can ask questions now at each stage, can't you?

[00:24:52] Scott Aziz Yeah, questions will get generated. Yeah.

[00:24:54] John Smart Yeah. So basically it's that sort of interactive approach that I really like, and I'd like to see it enhanced so that at each stage, like we said, we're calling this the Fourth Amigo, it plays the role of the Fourth Amigo in the Three Amigos session and helps you have conversations and enhances your conversations. And actually, I work with a lot of teams that aren't from the UK or the US and don't necessarily have English as a first language, and it's not an easy task to ask them to write acceptance criteria that are really well written, very clear, and unambiguous. So that sort of interactivity, where you can propose a draft requirement and it comes back with, hey, here are some areas you want to improve, or this is ambiguous, is really valuable. And also, I don't know whether Scott mentioned it, but it doesn't just propose changes, it proposes an enhanced version of the requirement. Here's your draft requirement; here's an enhanced version with the wording rewritten to make it a little bit less ambiguous, a little bit clearer, a little bit more complete. That is a real time-saver for teams in countries where English isn't the first language. Even if they can do it, it takes them longer, and this can really speed up the process. That's the interactive side of things that I'd like to see enhanced even more.

[00:26:29] Joe Colantonio A lot of times when people hear AI, they think, I'm being replaced. This sounds more like an enhancement, like an assistant, a buddy that's going to help you find ambiguity and get communication going. It's going to start communication rather than shut it down with, I did all the communication for you, here are your requirements, here are your test cases. Right?

[00:26:45] Scott Aziz Yeah.

[00:26:46] John Smart Exactly.

[00:26:47] Scott Aziz Yeah. Exactly. The way the platform was architected from the beginning is user-centric, human-centric. It's meant to be a productivity tool; it was never meant to be a replacement. And it's great, because there are so many assets it generates, and as a user of the platform you can pick and choose what you want to focus on, what you want to use, and what you don't want to use. There are points in the platform where you can just edit what the AI has produced. You're always in control as a user; you never have to accept any of the AI changes. The other great thing about the platform is that we show you everything that has been done. As John was mentioning a minute ago, there's an AI enhancement tab. Not only do we show you the enhancements, we show you, side by side, the original requirement you entered, your story, and right next to it, on the right-hand side, all the enhancements, and we bold and highlight literally every single change the AI engine has made. So you've always got clear transparency, and when you're using it as a user you feel like, yeah, I'm in control. It's a massive productivity booster, but I'm in control, not the AI. We want to keep that going. We're big believers in human-centric AI.

[00:28:06] Joe Colantonio Absolutely. And it seems like it encourages better testing. Like I said, I was happy to see security test cases as one of the outputs. This is probably a wish-list question, and I know you're just starting and can't add everything, but does it cover performance testing or accessibility testing? I know accessibility testing has been really hot the past year, so I don't know if that's something.

[00:28:26] Scott Aziz Definitely on the roadmap for sure. Accessibility, yes. Performance, we have been thinking about it. We honestly haven't had a lot of clients come to us and ask for that yet, but we're definitely open to doing something like that as well.

[00:28:40] Joe Colantonio What are your clients saying, unfiltered, once they use the product? Any case studies along the lines of, I knew I created this tool to help people, but I didn't know that once it got into their hands it could do this, and they actually did that?

[00:28:54] Scott Aziz Yeah, most people are wowed after the first demo, but once they've gone through a POC, the feedback we're hearing from teams is: I didn't realize AI could be trained to do something like this. I had no idea we could literally take our agile process, introduce a platform like this, and boost productivity across the rest of the agile team. Because, as you know, a lot of the focus today is on developer tools, coding tools, AI copilots. Well, what about the rest of the agile team? What about your project manager, your scrum master, your SMEs, your business folks, your quality engineers, your testers? So that's usually what we're hearing back from the teams. We hear a little bit of it during the demos, we hear some of that wow factor, but once they get their hands on it, once they've done a POC and then moved to a purchase, it's really great to see.

[00:29:48] Joe Colantonio I'm hearing it almost like it's an agile productivity booster, an AI agile productivity booster. Just spitballing titles here for the podcast.

[00:29:57] Scott Aziz Yeah absolutely it is.

[00:30:00] John Smart It's a copilot for requirements discovery.

[00:30:02] Joe Colantonio Oh okay. Even better. Yeah. Love it. Love it.

[00:30:05] Scott Aziz Good too.

[00:30:06] Joe Colantonio Yeah, yeah, for sure. All right, John, I know you do a lot of training, and I believe you have a dojo. Do you actually use Spec2TestAI in any trainings, for someone who really wants to say, hey, let me try this with an expert in a real-world scenario so I can learn it even better?

[00:30:22] John Smart We do, absolutely. We run the Serenity Dojo, which is for individual testers wanting to get up to speed with agile test automation; we do sessions each week on BDD, and we often use AI tools, including Spec2Test. We also run a tailored workshop for teams called the BDD Accelerator, where we apply these techniques with each team in their own domain, on their own problems, and again we do a lot of work with AI tools and Spec2Test in that space. So if teams are looking to adopt BDD, that's definitely a good way to go.

[00:31:02] Joe Colantonio And how do they get there, John, to learn more?

[00:31:04] John Smart The best way is just to get in touch with me directly. So there will be some links I presume under the webinar. But the best way is just to get in touch.

[00:31:15] Joe Colantonio Definitely check out all the links in the show notes to learn more. So Scott, one thing I know a lot of people are concerned about is ChatGPT leaking company secrets: putting all your requirements into ChatGPT, it somehow training on them, and then it all gets out into the wild. Is your solution using ChatGPT in the background, or is it more than just ChatGPT? How does this work? I imagine a lot of people would be concerned about it.

[00:31:39] Scott Aziz Yeah, absolutely. We would never use ChatGPT. We're working with literally some of the largest companies in the world right now, platform-wise, and security is absolutely imperative. So we absolutely stay away from public GPTs like ChatGPT, any mechanism where there could be a security risk or a security breach. You never want your company data or your IP information going into any of the AI training models, ever. That's just an unforgivable offense from a corporate perspective, as you know. So yeah, we're highly secure. Anything we do on the backend is through a secure API mechanism, and we steer away from any public GPT-like element.

[00:32:26] Joe Colantonio Okay. Scott and John, before we go, is there one piece of actionable advice you can give to someone to help them with their AI BDD testing efforts? And what's the best way to contact you or learn more about Spec2TestAI? Let's start with you, John, and we'll end with Scott.

[00:32:41] John Smart I think the biggest thing to get the bang for your buck out of this whole BDD agile test automation process is just to make sure you are having those conversations. And that's where Spec2Test really does shine, is helping people collaborate and have those conversations and make those conversations smoother and more productive, even if people are not necessarily in the same room. So the advice is make sure you're having the conversations.

[00:33:10] Scott Aziz Yeah, absolutely. In terms of learning more, please visit agileailabs.com and our YouTube channel, AgileAI Labs; there are some great demos out there. In terms of advice, there's a lot of really interesting experimentation you can do with the public GPTs, and I really do encourage people to do that, especially testing folks, because a lot of the experiments you could run may actually help you in your day-to-day work. For example, I'll make a recommendation right now: if you have the time as a test engineer, absolutely try to experiment with some of the more advanced design mechanisms around creating a prompt. What I mean by that is, take some sample requirements from your day-to-day job, make sure you sanitize them because there are security issues, and experiment with building some of those prompts up to try to produce test cases. I think you can get to the point as a test engineer where you may actually produce interesting, useful results that would likely help you in your day-to-day job. At heart, I'm an engineer, I love to experiment, and I absolutely encourage people to experiment. There's a lot of interesting stuff they can generate just using the public GPTs.
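As one possible starting point for the kind of experiment Scott suggests, here is a small sketch that assembles a test-generation prompt from a sanitized requirement. The prompt wording and class name are assumptions, and actually sending the prompt to a model is deliberately left out.

```java
// Illustrative only: building a test-generation prompt from a sanitized requirement.
public class TestCasePromptBuilder {

    public static String buildPrompt(String sanitizedRequirement) {
        return String.join("\n",
                "You are a senior QA engineer.",
                "Requirement:",
                sanitizedRequirement,
                "",
                "Produce functional test cases in a table with columns:",
                "ID | Preconditions | Steps | Expected result.",
                "Include happy-path, negative, and edge cases,",
                "and list any ambiguities you had to assume away.");
    }

    public static void main(String[] args) {
        String requirement = "As a shopper, I want to apply one discount code at checkout "
                + "so that the order total is reduced by the code's percentage.";
        System.out.println(buildPrompt(requirement));
    }
}
```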

[00:34:40] Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a492. And if the show has helped you in any way, why not rate it and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.

[00:35:16] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider who wants to offer real-world value that can improve skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
A person is speaking into a microphone on the "TestGuild News Show" with topics including weekly DevOps, automation, performance, and security testing. "Breaking News" is highlighted at the bottom.

SimpleQA, Playwright in DevOps, Testing too big? TGNS140

Posted on 11/04/2024

About This Episode: Are your tests too big? How can you use AI-powered ...

Mudit Singh TestGuild Automation Feature

AI as Your Testing Assistant with Mudit Singh

Posted on 11/03/2024

About This Episode: In this episode, we explore the future of automation, where ...

Eli Farhood TestGuild DevOps Toolchain

The Emerging Threats of AI with Eli Farhood

Posted on 10/30/2024

About this DevOps Toolchain Episode: Today, you're in for a treat with Eli ...