The Future of DevOps: AI-Driven Testing with Fitz Nowlan and Todd McNeal

By Test Guild

About this DevOps Toolchain Episode:

In this DevOps Toolchain episode, we explore the cutting-edge junction where AI meets software testing. Join host Joe Colantonio, Fitz Nowlan, and Todd McNeal as they unravel SmartBear's game-changing Reflect integration with Zephyr Scale. Discover:

🧠 AI-Powered Testing: Peek into a revolutionary leap in test automation as our guests detail SmartBear's new AI integration with Zephyr Scale in Jira. This pioneering move makes waves, promising to transform manual test cases into automated successes. With Reflect AI's role in turning English language sentences into precise, intent-based actions, manual tedium may soon become a relic of the past.

🔍 Augmentation, not Replacement: Grasp the essence of AI as an augmentative force, propelling teams towards unparalleled testing coverage and confidence. Gain insights on how AI can amplify your DevOps cycle without sidelining the irreplaceable human expertise.

💥 Leading the Charge: Be the first to witness how DevOps-first companies are spearheading AI adoption in daily workflows, setting precedent for the industry. Our hosts discuss the paradigm shift—the evolving role of testers and the relentless pursuit of quality software.

And much more!

Don't fall behind on the DevOps innovation curve. This episode promises to be an enlightening journey for tech enthusiasts, testers, software engineers, and DevOps practitioners alike. Listen up!

Check it out for yourself now: https://links.testguild.com/AV3a0

Also check out:

Preparing Your Team for AI in Test Management 
Run Automated Test with Reflect

Resources

TestGuild DevOps Toolchain Exclusive Sponsor

Sponsored by SmartBear, the confidence behind your code.

About Fitz Nowlan


Fitz Nowlan is Director of Engineering at SmartBear. Fitz is a software engineer and founder. He earned his PhD in computer science from Yale University with a focus on networking and distributed systems. He has worked at both big tech companies (Google and Microsoft) and startups, and co-founded Reflect, which was acquired by SmartBear in 2024. Fitz has built back-end systems and managed engineering teams, most recently leading AI integrations in B2B SaaS products.

Connect with Fitz Nowlan

About Todd McNeal


Todd McNeal is Director of Product Management at SmartBear. Todd is a co-founder of Reflect, a test automation tool that can execute manual test cases using AI, which was acquired by SmartBear in 2024. Todd is passionate about making test automation easy to create and maintain.

Connect with Todd McNeal

Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability for some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Hey, it's Joe, and welcome to another episode of The Test Guild DevOps Toolchain podcast. Today we'll be talking with Fitz and Todd all about Gen AI trends in testing and DevOps. Really excited to have them on the show. If you don't know, Fitz is a software engineer and founder. He earned his PhD from Yale University in computer science with a focus on networking and distributed systems, so he knows this stuff. He's worked at big tech companies like Google and Microsoft and at startups, and co-founded Reflect, which was acquired by SmartBear in 2024. You may have heard of Reflect; Todd, one of the other co-founders, has been on the automation podcast in the past. He did some webinars and got really great feedback on this tool. So really excited to have Todd join us again. Once again, he is a co-founder of Reflect, a test automation tool that can execute manual test cases using AI. Really cool stuff. And it was so cool that SmartBear acquired them in 2024, I think at the beginning of the year. Todd is really passionate about making test automation easy to create and maintain. I really think you're going to enjoy hearing how automation and testing fit into the DevOps toolchain. You don't want to miss it. Check it out.

[00:01:26] Hey, if your app is slow, it could be worse than an error. It could be frustrating. And one thing I've learned over my 25 years in the industry is that frustrated users don't last long. But since slow performance isn't sudden, it's hard for standard error monitoring tools to catch. That's why I think you should check out BugSnag, an all-in-one observability solution that has a way to automatically watch for these issues: real user monitoring. It checks and reports real-user performance data in real time so you can quickly identify lags. Plus, you get the context of where the lags are and how to fix them. Don't rely on frustrated user feedback. Find out for yourself. Go to bugsnag.com and try it for free. No credit card required. Check it out. Let me know what you think.

[00:02:19] Joe Colantonio Hey, Fitz and Todd, welcome to the Guild.

[00:02:23] Fitz Nowlan Hey, thanks so much for having us. Glad to be here.

[00:02:25] Todd McNeal Thanks for having us.

[00:02:27] Joe Colantonio Great to have you back, Todd. Good to see you, Fitz. So, before we get into it, is there anything I missed in your bio that you want the Guild to know more about?

[00:02:34] Fitz Nowlan Nothing on my end. No, I think that sums it up.

[00:02:36] Todd McNeal Yeah, I don't think so.

[00:02:38] Joe Colantonio Yeah. Awesome. I may have asked Todd this before. So I always ask founders: Fitz, what got you into this particular space? Why Reflect?

[00:02:47] Fitz Nowlan Yeah. So Todd and I, and I guess this is the one part that was relevant for the bios, worked together for five years at a previous company, and we were both directors of engineering there, managing teams, and we were just constantly doing these manual regression tests every time we would push out new code to our B2B SaaS offering. Curalate was the name of the previous company. So we would push out new code and we would run through these manual regression tests. And it just felt like we should be able to record those workflows and run them back on demand. And that was really the crux of it. So we set out to build that.

[00:03:21] Joe Colantonio Love it. So it was obviously a successful company, you were acquired. They saw the awesomeness in what you created; it actually adds a lot of value. So I guess my question is, when I started, software testing looked a lot different. With AI, there's been a lot of hype. Just curious to get your thoughts for a beginner in testing: what do the next two years look like compared to when I may have started? How much should I really know about AI? What is hype and what is real? Big question, but maybe one of you, or both of you, can handle it. Whoever wants to take it first?

[00:03:55] Fitz Nowlan Yeah, that's Todd's expertise.

[00:03:58] Joe Colantonio Oh, yeah. Yeah.

[00:03:59] Fitz Nowlan He'll walk you through the lifecycle.

[00:04:01] Todd McNeal So I think in technology, things are constantly changing. That's the constant: it's change. I think nowadays, if I were a tester, it's really a mix of keeping up with the latest technologies, AI being kind of the big one, and then also getting back to basics and really focusing on the value that tests provide, which is delivering quality software. So I think that's the thing that's not going to change: the more that you as a tester can deliver that value for your company and for your users, the more valuable you're going to be to the company. It's just that the methods of doing that are evolving now.

[00:04:39] Joe Colantonio All right. The next question that comes to mind is that a lot of companies are DevOps-first companies, and a lot of the responsibility falls on developers to do pretty much everything. Is that something you've been seeing, and how many of them actually care about automation? And how does AI maybe assist them with their day-to-day activities?

[00:04:57] Fitz Nowlan I would say I definitely agree with you that there's a huge trend toward developers owning DevOps and maintaining their own infrastructure. In terms of the AI influence, I think it's still very early innings. And I think what you saw from a lot of companies in the last year was a chatbot, basically chat with your infrastructure or chat with your account settings or configuration. And those were things that you could have just found on the settings page, or just by loading the web app. Where we tried to differentiate, and where I think the real killer apps will appear in the future, is when AI performs functions on behalf of the user. So when the AI doesn't just tell you something, but it does something. And that was the core of our AI integration in the last year. And I think that's what is delivering value for our customers. And I think that's where, again, the killer apps, the really differentiated AI-based companies, will be found in the next couple of years.

[00:05:53] Joe Colantonio So I don't know if you rebranded. Is it still called Reflect?

[00:05:57] Fitz Nowlan Yep.

[00:05:57] Joe Colantonio All right. So for the folks that may have missed it, like I said, I'll have some links to the webinar that Todd and I did back last year. What is Reflect, when we're talking about AI and automation? How does Reflect see AI? What's your special sauce, so to speak?

[00:06:12] Todd McNeal So yeah, Reflect is a web automation tool. And there are many tools for web automation, like code-based tools such as Selenium and Playwright, and low-code/no-code tools, which is the category that Reflect is in. I think what's differentiated with Reflect is that, starting in the middle of last year, we really went all in on generative AI and used it both for self-healing and for the ability to describe the test in English language. One thing that we observed being in the industry and having this in market for a couple of years is that a lot of times when you're automating something, especially with something like record and playback, it can be too specific. And what that causes is false failures, basically flaky tests, where the thing that's recorded when you click on a button is a selector, or maybe a couple of different selectors, but it's super specific, and any change to that button could cause it to fail. Whereas something that represents more of the intent of the action, like "click on the login button," literally that sentence, is actually more correct, because then, as long as the AI can interpret that sentence, click on the login button, it can perform that action as a human would. If there's a button that says login, it will click on that wherever it is. But if there is a button that says register and no other buttons, then it should fail the test, like a human would.

[00:07:42] Fitz Nowlan Yeah, another really good example there along those lines, Joe: if you imagine, we're no-code, but with Reflect AI we're now actually no-manual too, because you used to have to perform the action to get the record-and-playback tools to work. But now you can just describe what you want done with precise intent, and our AI can interpret that and then perform the action on your behalf. So a really classic example here is if you were to manually click on Jane Doe, the second entry in a table, record and playback doesn't know: did you mean to click on the second entry in the table, or on Jane Doe wherever that entry appears? Whereas if your intent was click on Jane Doe, then the AI can interpret that and can find Jane Doe. Similarly, you could express an intent of click on the second row, and the AI would always click on the second row. So that's where intent is actually more precise and simultaneously more robust and resilient to application changes.
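
To make the contrast concrete, here is a minimal sketch assuming a Selenium-driven browser. It is only an illustration of the general idea, not Reflect's actual implementation, and the URL, selector, and sentence are hypothetical.

```python
# Hypothetical illustration: a recorded selector versus the intent behind the click.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://app.example.com/login")  # hypothetical application under test

# Selector-bound step, typical of record-and-playback output: any change to this
# button's id, class, or position in the DOM fails the step, even though a human
# would still recognize the login button instantly.
driver.find_element(By.CSS_SELECTOR, "div.auth-panel > form > button#btn-42").click()

# Intent-based step: the stored "test code" is just the sentence a manual tester
# would read. An AI layer interprets it against whatever the page looks like today
# and clicks the matching element, or fails the test if no login button exists,
# which is the same judgment a human tester would make.
intent_step = "Click on the 'Log in' button"

driver.quit()
```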

[00:08:36] Joe Colantonio Absolutely. And along those lines, Todd, when we first talked to you, this was before, I think, what you called multimodal AI came out, which uses both audio and video to help with automation. And I guess Fitz touched on this, where automation can now do things beyond maybe just bubbling up insights and actually act on your behalf. Is multimodal AI a thing now? Did you bake that in since we last talked? And how does that solve maybe other challenges we may have seen with automation in the past?

[00:09:02] Todd McNeal Yeah. So that's something that we've incorporated into the model, and it's sort of just the beginnings of it, but it's something that we're working on now, adding it in full force. The reason that multimodal is so interesting, especially in a testing context, is that if we think of a human tester, the interface that the human tester has for testing a web application, or really any sort of application like a mobile app or desktop app, is your eyes and your keyboard and your mouse. That's a universal interface. And so the vision AI is kind of the final piece of the puzzle for having automation interact through that universal interface. Keyboard and mouse are pretty easy to automate, but to actually view and interpret what's presented to you on the screen has, up until now, been very difficult, and the state of the art really couldn't solve that. But with multimodal, the promise is that it can. And so really anything that has that universal interface of screen and input devices should be able to be automated with this approach.
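
As a rough sketch of what vision as a "universal interface" can look like, the example below takes a screenshot with Playwright and asks an OpenAI-style multimodal model to judge a step the way a human tester would eyeball the screen. The URL, prompt wording, and model choice are assumptions for illustration; this is not SmartBear's implementation.

```python
import base64
from openai import OpenAI                     # assumes the OpenAI Python SDK is installed
from playwright.sync_api import sync_playwright

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def check_step_visually(url: str, expectation: str) -> str:
    """Screenshot the page and ask a multimodal model whether the expectation holds,
    roughly the way a human tester would eyeball the screen."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        screenshot = page.screenshot()        # raw PNG bytes of the rendered page
        browser.close()

    image_b64 = base64.b64encode(screenshot).decode()
    response = client.chat.completions.create(
        model="gpt-4o",                       # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"You are verifying a manual test step. Does this page satisfy: "
                         f"'{expectation}'? Answer PASS or FAIL and explain briefly."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example (hypothetical URL and expectation):
# print(check_step_visually("https://app.example.com/login", "A 'Log in' button is visible"))
```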

[00:10:12] Joe Colantonio Do you see this as replacing testers? Like, what's the role of AI that you see in this space for someone that's interested in software testing and development? Is it a replacement? Is it something that's an add-on that helps? It's almost like a sidekick. How do you see it?

[00:10:28] Todd McNeal I see it as an augmentation. So the best AI tools, in my opinion, are ones that fit into your existing workflow. Like Copilot for developers basically helps replace having to look stuff up on Stack Overflow or the Java docs or something like that, and it makes you build things faster with the same level of quality, but it's still human in the loop. With testing, we think it's the same thing. You still need humans to make judgments on whether this is correct or not correct, describing what the functional requirements are. You need human experts. Maybe they are business analysts or people participating in UAT that have subject matter expertise in medical billing codes or things that may be more esoteric, that an AI would have a lot of difficulty really describing or knowing whether it's correct or not. But it's an accelerator, so it allows you to get more done. And I think that's what testing teams have always struggled with. You'll never meet a testing team that says we have too much testing, like we're testing too much. It's never enough. There's never enough time. It always takes too long. It's not working correctly. Those are things that are really ripe for an AI kind of augmentation.

[00:11:53] Fitz Nowlan And I'll just add one distinction to make there, I think along the lines of what Todd is saying: if you think of a tester today as both the person who is in charge of knowing what to test as well as executing that test, then we think there's still very much, absolutely, a role for someone to know what needs to be tested and to know the requirements of the application. Where we see AI providing a major augmentation, potentially a replacement but more so an augmentation, is on the execution side of things. I think you'll still want that expert in the loop to validate that the AI is, in fact, taking the correct action in certain edge cases or esoteric things, like Todd was saying. But we see the execution as basically being something the AI can more or less take over. But you still need an expert. You still need to know what's worth testing and what the requirements of your application are.

[00:12:46] Joe Colantonio Absolutely. I love how you said it's an augmentation and an accelerant. I know with DevOps, a lot of times teams tend to start failing or stop doing more automation because the automation they created, the scripts, aren't really accurate, or they're really bulky, or they take a long time to run and to create. So how does this help that story, maybe? How does that help accelerate DevOps in delivering to the user when you have maybe AI helping with this?

[00:13:12] Todd McNeal So I think it's similar to what the previous generation of DevOps tools did. I remember when I started my career in the mid-2000s, it was waterfall, and you needed a dedicated infrastructure manager and dedicated ops teams. And there was a wall between developers and the ops team. And by kind of breaking that down, things moved faster. But it ended up being a flywheel, where because we can deploy faster, we can ship faster, and now we're shipping faster, so we need to review it faster and we need to test it faster. So it's not that you kind of rest on your laurels. So I think where this fits in is, because we have those previous accelerants, where we can deploy things in 15 minutes and have it in a new environment that's spun up, that has good test data, now we need to fix the kind of lagging part, which is how do we test everything we want to. A lot of teams right now, the way they handle that is, well, we need our tests to finish in 10 or 15 minutes if we're doing it on every pull request, so we need basically just smoke tests. But what if you didn't need that? What if you could actually have comprehensive tests? I think if we can deliver a testing approach which lets you get high coverage with high confidence and actually be fast, that's where you can add that to a DevOps process that already is much improved over the previous way of doing things.

[00:14:46] Joe Colantonio Absolutely. And I guess another thing I've seen with a lot of the open source tooling and everything is that back in the day, it was easier to track how tests fit the requirements and changes in those requirements and such. I think one good thing about working for a vendor like SmartBear, or getting acquired by SmartBear, is they have these other solutions as well. Like, I think they have a test management system called Zephyr. So is there now any integration with AI between Reflect and Zephyr, so it can know what the requirements are and then do the automation for you, without you having to guess or kind of manually tie this requirement to this test?

[00:15:22] Todd McNeal Yeah. So one of the exciting things that we're releasing as part of SmartBear is an integration with Zephyr, specifically Zephyr Scale, which runs within Jira and is a very popular test case management tool in Jira. And with our approach to Gen AI, where you can take an English language sentence and execute it within the web browser, you could think of a manual test case as just a set of sentences, which are a set of instructions for the test automation. And so what our integration is going to do is allow all Zephyr customers to take their existing manual test cases and run them directly as automated tests using this AI approach. We're actually launching that later this month, and it will be freely available in beta for all Zephyr Scale customers.
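
To picture what "a manual test case is just a set of sentences" means in practice, here is a conceptual sketch: each Zephyr-style step is a plain-English instruction handed to an AI interpreter one at a time. The step wording and the execute_step helper are hypothetical placeholders, not the actual Reflect or Zephyr Scale API.

```python
# Conceptual sketch only: a manual test case reduced to an ordered list of
# plain-English steps that an AI executor could walk through one by one.

manual_test_case = [
    "Go to the login page",
    "Enter the username 'qa_user' and the password from the test data",
    "Click on the 'Log in' button",
    "Verify that the dashboard heading says 'Welcome'",
]

def execute_step(step: str) -> bool:
    """Placeholder for the AI piece: interpret one English sentence as a browser
    action or assertion against the live application, then report pass/fail."""
    raise NotImplementedError("hypothetical AI interpreter, not a real API")

def run_test_case(steps: list[str]) -> str:
    for number, step in enumerate(steps, start=1):
        if not execute_step(step):
            return f"FAIL at step {number}: {step}"
    return "PASS"
```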

[00:16:10] Joe Colantonio All right. So that sounds pretty crazy. So can you walk me through what that looks like? Like, I have a bunch of tests in Jira, I have Zephyr Scale. What, I push a button and they're automated? How's that going to look? What's the visual for people that are listening in?

[00:16:23] Fitz Nowlan Yeah, it'll be pretty cool. So if you think about it, you've had this repository of tests that are rich and valuable on their own. But to get that value, you always would have to have a manual tester come in and perform those actions live in the browser and then mark the progress: pass, fail, stuff like that. Reflect AI takes sentences as input, as Todd said, and all of your test steps in those test definitions are sentences. So yeah, more or less you're going to get a button that says run automated test next to your test scripts in Zephyr Scale. And when you click that, we'll pass it through Reflect, and we'll use AI to augment and improve those sentences where we can. And so hopefully the output of that will be a well-formed, well-described test script. And then we can go and execute each of those well-formed test script steps using our AI and then return the results back. There may be a little bit of hand-holding or guiding or narrowing of your language in those test scripts as you start to use Reflect AI from Zephyr Scale, but the hope is that over time you'll get better at describing your test steps in a way that Reflect AI can execute them automatically. And for the bulk of your tests that are well described already, Reflect AI should work on its own, and it's a seamless integration. When you're in your Zephyr Scale account, you automatically have a Reflect account, and the results are posted back to your Zephyr Scale account. So it's kind of like bringing an execution engine to your Zephyr Scale account.

[00:17:52] Joe Colantonio I know you're not salespeople. So if I have a Zephyr Scale account, you're saying now I get Reflect out of the box?

[00:17:57] Fitz Nowlan For the beta period, yeah. Everyone in Zephyr Scale will get it for free out of the box. And over time we'll look to work in the correct pricing. That hasn't been finalized or published yet, but there will be a beta period where everyone who's a Zephyr Scale customer will get access to Reflect AI for free.

[00:18:14] Joe Colantonio All right. So can we talk about some of the impacts of this? Because I think it's kind of crazy. So now, I think a lot of times you have a requirements system, you have your requirements, and then you have your code base of your tests. So when someone makes a change to a requirement, you're like, oh dang. All right, this test, that test, that test, you need to update these tests. Is this going to be a case where, rather than having to worry about that, you just go to Jira, you make the change in Zephyr Scale, you press the button, and it just creates the new test? Or does it update existing tests? I don't know if that makes sense, but I'm just trying to see how it streamlines the process.

[00:18:48] Todd McNeal No, it definitely makes sense. What Fitz was describing is when you're adding tests into Zephyr for the first time. It's sort of a process where we're guiding you through: hey, we made these improvements, do they look good? Hey, we're going to ask you to watch the test run as we run it. If there's something missing from the test, like you didn't describe the username or password or something you have to enter, it walks you through that process. Once you have your test cases automated, yes, over time they're definitely going to change as your requirements change. But that process is basically the same as what you do today. You go into Zephyr Scale and change your requirements, which is in Jira. When you run it the next time, those updates are going to be passed to Reflect. There's no kind of remembering to do it over here and then doing it over there. It's all seamlessly connected.

[00:19:36] Fitz Nowlan It's a bidirectional sync, you could imagine. And then as soon as your requirements change, as soon as you update that test, we'll be able to execute it.

[00:19:45] Joe Colantonio Maybe on your roadmap, you probably already thought about it. So when we talk about requirements, there could be a security requirement, it could be an accessibility requirement, it could be a performance requirement. These are all tools, I think, that SmartBear has. Is this just for functional requirements, or will it eventually go to, like, performance requirements? Like, the login page should load up within 10 seconds and stuff like that. Or is that just not happening? Is it possible? Is it on the roadmap? How is that going to work?

[00:20:12] Todd McNeal We're definitely focused on the functional requirements to start, but I think ideally anything that could be automated will be automated through this approach. And there are some things that I think still make sense to do in code. For example, if you're writing API tests, it's a little awkward to write those as English language sentences. We don't want to try to fit a square peg in a round hole. But if it's common for folks to document requirements or test cases in English today, I think that's certainly ripe for us to try to automate through this approach.

[00:20:54] Fitz Nowlan Yeah, and I think there is an opportunity in the future, because requirements are often expressed in a plain text format. That suggests the AI should be able to digest that and make heads or tails of it, and then turn it into some sort of structured form and derive value from it. I think Todd's example of the API testing is a really great example, because there are some things that, even today, aren't quite expressed in plain text. They are always expressed in some sort of structured programmatic form. And so those things aren't perfect fits for the LLM, for generative AI, the way that anything in plain text is.

[00:21:34] Joe Colantonio So how specific must the requirements be? I mean, yes, it's plain English, but you could still have a not very clear requirement that someone writes. How does the AI, the execution engine, handle that then? Is it smart enough to know it, or does it bubble it up: hey, you need to give me more insight? How does that work?

[00:21:53] Fitz Nowlan It's a little bit like the latter there. When we come across things we can't do, we will ask you for input, but we will have a go at trying to do it ourselves before we sort of throw our hands up and go back to the user. The idea here is the straightforward instructions we can do today. If there's something that's not so straightforward, we will attempt to tease it out, and we'll try to explain what process we took to tease out that action or that instruction. And then when we can't, or we think we've gotten into ambiguous territory, we'll go back to the user. In the future, there is a sort of second level of AI processing on the roadmap that we have, which would allow us to hopefully resolve that extremely ambiguous requirement more automatically. And Todd, you probably want to add a little bit there as well.

[00:22:41] Todd McNeal Well, I would just say the way I like to describe the level of specificity of the test that you're writing is: write a test for a manual tester who is a new employee on your team. So if you can write a test and give it to that person, or maybe give it to three new employees, and they all perform the same actions and come up with the same result, then our AI should be able to execute it too. As a practical example, you can't say, go place an order on an e-commerce site, because each tester is going to try different products. They're not going to know what to do if the product's out of stock. Should I fail the test? But if you get to the level of specificity where they all did the same thing, then you're good. That's the ideal.

[00:23:30] Joe Colantonio How does this fit into a DevOps pipeline, into CI/CD? Who handles the execution, what executes what? Like, someone checked in code, run this suite of tests. Does it run all the tests? Is there that type of communication? Or does someone actually have to go to Zephyr Scale and say, run these tests?

[00:23:48] Todd McNeal So it actually gets at a workflow today that is a little cut off from the DevOps workflow. If you look at a typical customer that's doing manual testing, what a lot of them look like is: you have a DevOps workflow up to the point that you deploy to, say, QA or staging, and then you go to the testing team and say, okay, good to go, and it's in their hands. And so they hand off and say it's good. But that process today is literally going in, clicking buttons to say passed or failed, opening up bug tickets. It's cut off from this automated pipeline. But this approach actually gets it more integrated, because to start off with, using this Reflect integration, yeah, you would still go into Zephyr Scale and click the individual test to run it. Once you've gotten enough tests automated that you can run them in a cycle, there's a button to click to say, run it for all the tests. And then as you get more sophisticated, you can start using some of the automated capabilities in Reflect to run it automatically after a CI pipeline. So you can actually think of this as a way to go from a process that's maybe totally cut off from your DevOps pipeline and start slowly integrating it into it.
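
For that last stage Todd describes, here is one hedged sketch of what "run it automatically after a CI pipeline" could look like: a post-deploy script that kicks off a test cycle through a REST API and fails the build when the cycle fails. The endpoint paths, token name, and response fields are hypothetical placeholders, not a documented Reflect or Zephyr Scale API.

```python
import os
import sys
import time
import requests

# Hypothetical placeholders: substitute your real execution API and credentials.
API_BASE = "https://api.example-test-runner.com/v1"
API_KEY = os.environ["TEST_RUNNER_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def run_cycle_after_deploy(cycle_id: str, target_url: str) -> None:
    """Kick off a test cycle against the freshly deployed environment and poll
    until it finishes, exiting non-zero so the CI job fails when tests fail."""
    kickoff = requests.post(
        f"{API_BASE}/cycles/{cycle_id}/runs",
        headers=HEADERS,
        json={"baseUrl": target_url},
        timeout=30,
    )
    kickoff.raise_for_status()
    run_id = kickoff.json()["runId"]

    while True:
        status = requests.get(f"{API_BASE}/runs/{run_id}", headers=HEADERS, timeout=30).json()
        if status["state"] in ("passed", "failed"):
            break
        time.sleep(15)  # poll until the AI-driven execution completes

    print(f"Test cycle {cycle_id}: {status['state']}")
    sys.exit(0 if status["state"] == "passed" else 1)

if __name__ == "__main__":
    # e.g. called from the CI job right after deploying to staging
    run_cycle_after_deploy(cycle_id="SMOKE-CYCLE-1", target_url="https://staging.example.com")
```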

[00:25:13] Joe Colantonio I'm just trying to think now of this integration. Say someone has Zephyr Scale and they're all jazzed up now hearing this. How do they get started? Do they need to rewrite anything? Or is it, like you said, you're good to go and press the button? How can someone take advantage of this, or maybe try the beta version, with the lowest amount of overhead on their plates?

[00:25:35] Todd McNeal Okay. Yeah. So it's going to be open to all Zephyr Scale customers. And when we launch that later this month, there will be a run automated test button next to each test within Zephyr Scale. And as Fitz described, when you run that, click the run button, it's basically going to walk you through the process of getting that into an automated state. So the first step is going to be looking at the test, and Reflect AI saying, okay, here are some improvements we made. There are some spelling mistakes over here, there are some grammatical mistakes over here, this test step is actually a couple of test steps, so we split it out. You can review it, but the AI does that all for you. You just review it and say okay. And then the next phase is, okay, now we're going to start to run it, let's watch it. If we run into any issues, like maybe you didn't specify this particular action and we don't know what to do, we're going to ask you to describe that action or do that action, and then we'll transcribe it as English language. It's designed to walk you through that process. So even if you're a tester that's never really been comfortable using automated tools, it's going to feel like a tool that was built for you, rather than a tool built for someone who's really comfortable with coding or is a developer.

[00:26:54] Joe Colantonio How does it know where to run?

[00:26:55] Todd McNeal That's one of the things that you describe. Before you click run, it asks you for the URL that you want to test in. So you give it that. That's the one piece of missing information you need to give that's not going to be in the test.

[00:27:08] Joe Colantonio Now, if it fails, does it give you any extra insights? Does it mark anything in Zephyr Scale to say, hey, this failed, or these requirements weren't fulfilled, or anything like that?

[00:27:18] Todd McNeal Yeah. So when you're first building it, it's going to give you feedback, and that's part of where we use our vision AI. If we hit an error, say right off the bat, we'll describe not only, hey, it's a failure, but we'll actually use vision to describe here's why we couldn't execute this step. And then later on, once you have your tests automated and you're running them in a test cycle, it's going to automatically mark them as pass or fail in the test cycle, so you don't have to do that manually.

[00:27:46] Joe Colantonio So once again, I guess eventually Reflect would just be like the execution engine, and Zephyr Scale would be like the dashboard that people would log in to and do everything from. Is that how it works, or would it ever be that someone would just have Reflect still? My guess is it's too early to know. But what's the vision long term for this?

[00:28:03] Fitz Nowlan Yeah, I would say I see them as still complementary, even in the long term. You may have customers who, for whatever reason, just don't have the desire, I guess, to organize all of their tests in a test case management solution. Or maybe they're not on Jira; Zephyr Scale is exclusively in Jira. So I still see maybe a market for Reflect long term. But that said, if you could have sort of the organization and the reporting and the tight integration with Jira of Zephyr Scale, and the execution engine of Reflect, it does kind of make sense long term as a married couple, you could think of it. Reflect has previously long integrated with test case management tools and has allowed you to basically export all of your Reflect tests into your test case management tool, like Zephyr Scale. So to the extent that that's valuable to you, and I think that's going to be valuable to most customers, I think it makes sense to pair the two up. But we don't have long-term plans to require you to use test case management tools with Reflect. Reflect standalone still works fine.

[00:29:05] Todd McNeal Yeah. And that's one of the things, we're still early at SmartBear, but one of the things I've learned is that their approach with their suite of tools is to kind of allow you to mix and match things. It doesn't require you to go all in and maybe have to buy tools that you're never going to use, which I think is nice, because our experience has been that everyone has slightly different processes. And we want to try to go where our customers are, not force them into a different process.

[00:29:37] Joe Colantonio Love it. I've used SmartBear for years, and I mean years, and seen all the great things they've done. They've been a great company, so I think it's a great, great match. Anything down the road? I mean, you guys seem like visionaries, because you came up with the solution, it seems, at the right time, at the right place, and it took off. Anything you see on the horizon, other maybe kind of transformative technologies coming up, that people need to know about? Or anything more about AI people need to know about, or maybe misunderstand?

[00:30:04] Fitz Nowlan I think Todd probably has some ideas. I'll let him answer. I'll just add one little nugget.

[00:30:09] Joe Colantonio Okay.

[00:30:09] Fitz Nowlan That, just to reiterate what I said earlier, is that I think the killer apps for AI are when the AI does something for you, not when it just provides insights. Sorry, I should narrow that a little bit: the killer app for software engineering, for DevOps, for tech, for software quality, is when the AI does something for you. It's not just insights. Obviously, in the more creative industries, the AI can just provide insights or generate content that's value creating. But I think value creation occurs in software engineering and DevOps and test automation when the AI is doing something. So you really have to find actions or functions that the human is doing today that the AI can do for them, to free them up to do higher-order stuff. That's kind of my nugget.

[00:30:54] Todd McNeal Yeah, I would say there are two things that I think are really interesting. One is other places within your existing testing process that could have this AI augmentation. So if you think all the way from requirements definition to the things you would do after deployment, and you think about each individual action, there's probably some AI-augmented action that can make that particular part better. And the exciting thing about that is, if you're an organization, you can look at the weakest part of that process for your organization and try to get AI adopted for just that, instead of trying to do the whole entire thing and change your process. The second thing that I think is really interesting is vision AI. I think that's going to change a lot. It's going to allow things that have not been possible technically and make them possible. The universal interface could change a lot.

[00:31:54] Joe Colantonio Absolutely. Yeah. I've been messing around with something for music called suno.ai

[00:31:58] Todd McNeal I heard about that.

[00:32:00] Joe Colantonio The stuff it creates is crazy. I mean, it's just nuts what it can do. So just applying it to creative things like that, yeah, I just see this being a real thing. I think a lot of people are kind of overhyping it or underhyping it, and I just think they shouldn't underhype it at all. But anyway, before we go, any parting words of wisdom you want to leave The Guild? And what's the best way to find out or learn more about Reflect, with this Zephyr Scale solution coming up?

[00:32:26] Todd McNeal Parting words? No, I would just like to thank you again for having us on, Joe. It's always been a pleasure for me to speak with you.

[00:32:33] Joe Colantonio Same.

[00:32:34] Todd McNeal And then to find Reflect, we're at Reflect.run. That's our URL on the web. And for the SmartBear line of products, it's smartbear.com. Zephyr Scale, you can find that in the Atlassian Marketplace within Jira. And yeah, we'll be releasing this to all Zephyr Scale customers in beta by the end of April.

[00:32:55] Joe Colantonio Nice. Fitz, any parting words of wisdom?

[00:32:59] Fitz Nowlan No, just to say thank you. And yeah, everyone look out for that blue run automated test button in your Zephyr Scale in the coming weeks. We're excited.

[00:33:08] Joe Colantonio Very cool. Absolutely.

[00:33:10] Fitz Nowlan Hoping to get a lot of users with that.

[00:33:13] Joe Colantonio Sweet. Well, thank you guys. Appreciate it. So guys, I know you'll probably go around showcasing this at different conferences. Is there anything coming up that people listening could check out, maybe in person, and actually see it? Because you did a demo on the webinar and everyone was like, okay, now I see it. I think there's something you need to see to believe. So anyway, are you going to be demonstrating this awesomeness?

[00:33:32] Todd McNeal Yeah, we'll be at the Atlassian Team conference at the end of April. So if anybody is going there on site (they also have a virtual conference component to it too), feel free to reach out. We're happy to talk to customers or anyone curious about how our AI works.

[00:33:51] Remember, latency is the silent killer of your app. Don't rely on frustrated user feedback. You can know exactly what's happening and how to fix it with BugSnag from SmartBear. See it for yourself. Go to BugSnag.com and try it for free. No credit card is required. Check it out. Let me know what you think.

[00:34:12] And for links to everything of value we covered in this DevOps Toolchain Show, head on over to Testguild.com/p143. And while you're there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions to give you the visibility you need to deliver great software. That's smartbear.com. That's it for this episode of the DevOps Toolchain Show. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.

[00:35:15] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the FAM at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}