About This Episode:
Today, we explore Generative AI (Gen AI) in QA and testing, featuring insights from our esteemed guest, Fitz Nowlan, VP of AI and Architecture at SmartBear.
Try Zephyr Scale Now https://testguild.me/Zephyr
In this episode, we explore how Gen AI impacts testing, from automating routine tasks to freeing human testers to focus on higher-level strategic decision-making. Fitz and I discuss the critical role of human judgment in evaluating performance metrics, the importance of understanding application requirements, and the emerging trend of agentic workflows in AI by 2025.
Fitz shares his vision for the future of software creation and the evolving role of testers in what he describes as a “golden age” for QA. Whether you're a tester, developer, or simply curious about the future of AI in software testing, this episode is packed with valuable insights and practical advice.
Zephyr Scale – Test Management for Jira
Deliver better software, faster with the Jira-native test management solution that performs at scale.
Try Zephyr Scale Now https://testguild.me/Zephyr
About Fitz Nowlan
Fitz is a software engineer and founder. He earned his PhD in CS from Yale University, and most recently leads architecture and AI integrations for B2B SaaS products at SmartBear.
Connect with Fitz Nowlan
- Company: www.smartbear.com
- LinkedIn: www.linkedin.com/in/fitz-nowlan
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:00] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.
[00:00:34] Joe Colantonio Hey, welcome to another episode of the Test Guild Automation Podcast. If you want to learn more about how QA and Gen AI can merge together to help you with testing, especially in the future, you're in for a treat, because we have Fitz joining us, who is a software engineer and founder. He earned a Ph.D. in CS from Yale University, and most recently he leads architecture and AI integrations at, as you know, SmartBear. So really excited to have him back on the show. It's been a few months since he was last on, back in April in episode 142. So great to have him back, because things are always changing with AI, and we get a new perspective on where we're heading, especially as we head into the new year. You don't want to miss it. Check it out.
[00:01:13] Joe Colantonio Hey Fitz, welcome back to The Guild.
[00:01:17] Fitz Nowlan Hey, Joe, thanks so much for having me. It's great to be back. It's been about half a year now.
[00:01:21] Joe Colantonio Half a year seems like forever in AI years. What's new and great? Because it seems like Gen AI now seems like it's been around forever and everyone's talking about it. Why this topic at this point?
[00:01:32] Fitz Nowlan It's a good question. I think, as you said, things are changing a ton. It feels like the initial wave has kind of crested. Everyone realizes the baseline power of Gen AI, and now I think everyone's gotten past the basic chatbot experience and they're looking for a little more functionality, a little more actual value creation from their Gen AI. And so that's where I think we're starting to enter a new phase of Gen AI, where we're actually doing things for users instead of just making it easier to find information, kind of going beyond the search engine experience.
[00:02:05] Joe Colantonio All right. So I've been hearing a lot about agentic AI. Is that what we're talking about? Is this something different?
[00:02:09] Fitz Nowlan Yeah, kind of. Maybe not in that exact term, but I think all these experiences that you'll now see in 2025 from vendors will be more agentic in spirit. They may not call it that, but they're basically trying to do the functions for you that you would have done previously, rather than just exposing information to you.
[00:02:27] Joe Colantonio You have a lot of hands-on experience with what you're working with. Just curious to know how you see Gen AI transforming the role of QA. I think in the pre-show you talked about how it helps QA go from tactical to strategic.
[00:02:39] Fitz Nowlan Yeah, exactly, Joe. So if you think about your QA process today, historically it's made up of a lot of tactical steps, a lot of tasks that you perform: you check a box off a list, then you record your results, and then you say go or no-go. Gen AI is going to be able to turn a lot of those tasks into automatable tasks, whereas before the only thing that was automatable was what you could actually get an engineer to code up for you in a script. Now a human can get a script, an automated task or process, out of a task they used to do themselves using Gen AI. And so that moves them from a tactical operator to a strategic thinker. And that's where I think the true value is created with Gen AI in QA.
[00:03:19] Joe Colantonio All right. So I just got off a webinar this morning with two AI automation über thought leaders. And the idea was brought up that it's almost like we're entering the golden age of testers, because back in my day you had to be a tester, then you had to learn automation, become a developer. Now it sounds like AI is replacing automation in a sense, and we need to become testers again. Is that what you're seeing? Because that's what it sounds like to me.
[00:03:44] Fitz Nowlan Yeah, exactly. The strategic thinker is basically the expert in your application. It's the expert in what the application should do and when the quality bar is met. That's a much wider pool of people than just the people who could code up automation before, or the people who were narrowly assigned to do these repetitive tasks. It's operating at a higher level of abstraction, which I think is a topic we talked about last time I was on the show. It's our firm belief that Gen AI raises the bar on where the human operates in the QA process.
[00:04:15] Joe Colantonio And I'm assuming a lot of testers don't like to hear this, because they're pretty much secure in what they do. They know the tools, they know how to code now. What's the main difference between traditional code-based testing, which everyone seems to have finally embraced, and AI-driven testing? How does that even differ, and what do I need to look out for if I do go that approach?
[00:04:36] Fitz Nowlan Yeah, it's a really good question. I think it shouldn't be something scary for someone in QA; in the big picture it basically broadens the pool of things that they can now do. Previously you had a narrow set of tasks that could be automated with scripts and code, and also a narrow pool of people who could do that automation for you. The pool has broadened both in terms of what is automatable, because Gen AI can automate some of these repetitive tasks, either by controlling the application for you or by giving you higher-level scripts to work in that don't actually require you to write Python code or Java code, and in terms of the pool of people who can actually do that, which now includes the people who are the experts in your application. It's, again, a higher level of abstraction. If you are familiar with the application and you know how it should perform, you are now capable of testing it. So it shouldn't be scary. It's actually an empowering thing. And I also think it's all the more important that a human remains in the loop, because the humans are the final decision makers for what the application should do and whether it's meeting the quality bar. They set that quality bar. In other words, here's a great example: Gen AI could tell you that your application generally performs under 300 milliseconds of latency, you know, that it returns a result within 300 milliseconds. But Gen AI doesn't know if that's a suitable metric for your application. In some applications that might matter; in other applications it might not. The humans are the final decision makers in that sense.
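To make that latency example concrete, here's a minimal Python sketch (the URL, function names, and the 300 ms budget are illustrative assumptions, not any particular tool's API): the measurement itself is automatable, but only a human decides whether 300 milliseconds is the right bar for this application.

```python
import time
import requests  # assumes the third-party "requests" library is installed

LATENCY_BUDGET_MS = 300  # the human-chosen quality bar; AI can measure, but not choose, this number

def measure_latency_ms(url: str) -> float:
    # Time a single GET request and return the elapsed milliseconds.
    start = time.perf_counter()
    requests.get(url, timeout=5)
    return (time.perf_counter() - start) * 1000

def meets_quality_bar(url: str) -> bool:
    # The measurement is automatable; judging whether the budget suits the app is not.
    latency = measure_latency_ms(url)
    print(f"{url} responded in {latency:.0f} ms (budget: {LATENCY_BUDGET_MS} ms)")
    return latency <= LATENCY_BUDGET_MS
```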
[00:05:57] Joe Colantonio Yeah, 100% agree. But it also almost sounds like when I first started in the late 90s, there were keyword-driven approaches to automation, and then there was BDD, and it was all in the spirit of let's get less technical people involved so everyone can contribute to QA and testing. It never happened. How is Gen AI different? Like, if someone's listening to this, they're like, great, I'll get my whole team to do this because now they don't need to be an über coder. In the end, how do you get people to actually embrace it, I guess?
[00:06:28] Fitz Nowlan Yeah, I 100% agree with that point that it never really happened, that everyone got involved. I think what's really different about this is that Gen AI takes us back to the source of truth for an application. And the source of truth for an application is the user story. It's the requirements. It's something human-authored about what should be built and what the purpose of the application is. That expression, that document, those wireframes, whatever artifacts are produced for the applications we want to build, those are your inputs to Gen AI on the code-building side of things, and they also serve as your source of truth for testing on the QA side of things. So in other words, it's not so much that we have to get everybody involved; it's that everyone is going to be involved by virtue of the power of Gen AI operating from the source-of-truth artifacts that we already have for building software, if that makes sense. We're kind of backdooring our way into Gen AI impacting QA and software building by virtue of the fact that it can speak to our source-of-truth artifacts, it can speak that same language.
[00:07:37] Joe Colantonio All right. So now that it's backdooring its way in, how can teams prepare for it? I think it's actually going to be inevitable that people are going to have to embrace some sort of AI. How do we start working it into our current workflows? I talk to a lot of testers who are playing around with a lot of these AI tools. Is there anything they need to know in order to get started, to prime their workflows to really embrace the Gen AI approach?
[00:08:00] Fitz Nowlan Yeah. For me, I think the best way to think about this is to divide your work into what is an automatable or repetitive task, sort of a procedure or a function that you're always doing, and what is a logical or creative decision that you're making. On the one hand, you think Gen AI can create tons of content, and it certainly can, but there's still something fundamentally different, at least today, between that and the human-created content or the human decision being made about whether the quality bar is met, or whether the application is performing to spec, or why we should build this application in the first place, or who we want to serve. Those are all human decisions. You want to separate your work, whether it's on the software creation side or on the software testing side, along the lines of: what does not really require me, what's an automatable task, what's a procedure that can be performed, versus where am I creating value, where am I making a difference as a human decision maker? And then obviously you want to maximize your impact on the human side of things and prepare for the future where those automatable tasks, those procedures, can be performed by AI.
[00:09:07] Joe Colantonio All right. So it sounds just like automation. You want to find the repetitive tasks and then replace those with AI. For the critical logic and decision-making, can I create like a JoeLLM that I train on my decision-making, which would then act on the AI in what it's doing?
[00:09:22] Fitz Nowlan Yes.
[00:09:23] Joe Colantonio At some point, is that far fetched?
[00:09:25] Fitz Nowlan No, no, no. I think that's reasonable. At least from everything I've seen, you still want a human signing off on the final decisions. It could make those decisions, it could prep those decisions, it could seed an idea for a decision, but you still want human signoff, I think, for most of your tasks, solely due to hallucinations at this point. Gen AI won't solve that, based on its mathematical construction. It's still always going to be sampling from a distribution, and so that's not going to change. I think you will always need some kind of a human in the loop for that final signoff.
[00:10:00] Joe Colantonio You work for a vendor. I'm sure you talk to a lot of companies up front. How many people are thinking AI can replace my whole development and testing team now?
[00:10:11] Fitz Nowlan We don't hear that a lot. What we actually hear is: we can go faster, we can ship faster with higher quality. And obviously, I'm an engineer by training; on the software creation side, we're under just as much of a quote-unquote threat from Gen AI as on the testing side, maybe even more on the creation side. But on the testing side, we have the benefit of all the terrible code that's about to be written by Gen AI for the next five years, right? Gen AI is going to infiltrate the code creation side. It's going to produce a massive amount of bugs and opportunity for vendors like SmartBear to come in and test the quality of that code and give people confidence in what they've written. I don't think it goes away. I think the landscape changes a bit, but it doesn't go away.
[00:10:54] Joe Colantonio All right. So it seems like, as we mentioned, a lot of companies that aren't necessarily testing companies are coming in with these types of solutions that can automate tasks, not necessarily tests, but could be used for tasks. I think Anthropic just came out with one called Computer Use, and obviously they have millions and millions of dollars, probably billions, behind them. OpenAI is supposed to come out with one later on. How do I know what tool to get behind, especially when these larger companies eventually almost seem like they could be monopolies that are going to be able to do all the features that a lot of the testing companies have right now? I don't know if that makes sense.
[00:11:30] Fitz Nowlan Yeah, it's a really good question. The Computer Use tool from Anthropic is straightforward in one sense: humans interact with computers through a finite number of inputs. There's a mouse, there's a keyboard, and then there's a screen that we read and react to. If you could just manipulate those three axes, you could certainly look like a user of a computer. So on the one hand, it's sort of straightforward. On the other hand, it's obviously a massive amount of engineering, and billions and billions of dollars, like you said, have gone into making that actually look like a real human using a computer. So very, very impressive in that sense. In terms of getting behind a vendor, I'm a big believer that over time the open source models will eventually catch up. If you look at how far back they were two years ago and how far back they are today, they're closer than ever. And at some point we have this question of, do we run out of training data? Can OpenAI crawl enough data on the whole Internet to get a better model? I think we're at a point where the paid models are still ahead. The paid vendors, the Computer Use tool from Anthropic, the new versions from OpenAI, they're going to be ahead. But long term, I actually don't think you need to pick a vendor. I think the open source models will proliferate, and in the future you'll have your choice of the right model for the right job.
[00:12:39] Joe Colantonio All right. So if that's the case, if you have a vendor tool, as you do, do you then decide behind the scenes what model you're going to use? And how would a vendor then still be relevant? I know they'll be relevant, but I'm just curious how a vendor exists in that type of scenario.
[00:12:54] Fitz Nowlan Yeah, good question. At SmartBear, we don't actually train our own models. We use the foundation models and then we hone them, basically fine-tune them, into our use cases. And so vendors are always going to be relevant, because the user isn't just buying the model; they're buying the experience that they get from the vendor with that model, the expertise of using the model for a specific purpose, with all the guardrails and all of the reporting and the notifications and the user experience. All of that comes together into an application experience that customers will want to pay for. I think vendors remain very relevant into the future, even as the models get very powerful, because, ultimately, when you drive a car, for example, you have a familiar set of controls. When you fly a plane, you have a different set of controls. When you do a certain task, you want to fit into the look and feel that you're used to. There's simply a better look and feel for one task versus another. And so that always remains. Vendors remain relevant as these models get very good. But my point is that I don't think the foundation models will necessarily be ahead of the open source models forever. That's good for vendors and it's good for customers, because vendors can choose a cheaper model and customers can get those benefits passed on to them.
[00:14:03] Joe Colantonio Absolutely. I've been kind of surprised at the adoption of a lot of these technologies. And with things like self-driving cars, well, they have laws; you can't just let it make the decisions for you. Do you foresee that being an issue, where you have situations like laws being passed so you can't have Gen AI making all this decision-making for you, and you need a person in the middle to audit it or be accountable? I mean, are those the types of things we're already ahead of, or something we need to get ahead of before laws get put into place?
[00:14:33] Fitz Nowlan Yeah, it's a good question. Personally, just the litigiousness of society today makes me think that there will be laws for everything in just a couple of years; the legal landscape will change rapidly and dramatically going forward. So yeah, in that sense you have to stay in touch with that. In terms of how you bring AI to market as a vendor, I think there are some pillars that you can establish that let your customer know you've thought about the right way to bring AI into the market and that you're communicating with them about when AI is in use. For us, we have pillars around transparency and accountability: tell your customer when you're using AI for something. We have data privacy and security: tell them if you're training a model or not. We don't train models at SmartBear, but we take great care with our customers' data. Another is this idea of ongoing assessment: we have a golden set, basically a regression set that we test our own AI integrations on, to make sure that when we move to a new model, we're still delivering the same quality of service to our customers. And then the last pillar of our framework is legal compliance, which you kind of alluded to. We have to stay on top of the changing legal landscape for AI. I think your example of self-driving cars is the perfect one. We may see cases in the future where you can't use AI without telling your customer exactly how you're using it, and we would obviously welcome that. That's a great development.
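For listeners wondering what a "golden set" check might look like in practice, here's a minimal sketch; the prompts, expected keywords, and the ask_model() stand-in are invented for illustration and are not SmartBear's actual framework. The idea: before switching to a new model, replay a fixed set of known prompts and verify the answers still meet expectations.

```python
# Hypothetical golden-set regression check for an AI integration.
GOLDEN_SET = [
    {"prompt": "Summarize: login fails with error 500 after password reset.",
     "expected_keywords": ["login", "500", "password reset"]},
    {"prompt": "Write a test step that verifies the cart total updates after adding an item.",
     "expected_keywords": ["cart", "total"]},
]

def passes_golden_set(ask_model, min_pass_rate: float = 0.95) -> bool:
    # ask_model(prompt) -> str is a stand-in for whichever model is under evaluation.
    passed = 0
    for case in GOLDEN_SET:
        answer = ask_model(case["prompt"]).lower()
        # A case passes if the answer still covers the expected key points.
        if all(keyword in answer for keyword in case["expected_keywords"]):
            passed += 1
    rate = passed / len(GOLDEN_SET)
    print(f"Golden set pass rate: {rate:.0%}")
    return rate >= min_pass_rate
```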
[00:15:56] Joe Colantonio Absolutely. Going back to what we talked about earlier, how Gen AI can do the repetitive tasks and you need a human in the loop: couldn't I just train a model, like I said, a JoeAI, and point it at all the documents from my application, all the requirements, all the discussions from my team, and it could probably make better decisions than me? All I need to do is prompt it with the right questions. Is that being realistic?
[00:16:21] Fitz Nowlan Yes. So you're saying on, like, the software creation side of things?
[00:16:24] Joe Colantonio Yeah, like minimizing the human even more, because I'm not the only domain expert anymore. The thing I train on all the documents that I've read is probably going to be a better domain expert than I am.
[00:16:36] Fitz Nowlan Yeah, it's certainly possible. I don't know when. Maybe it's not even a question of if, it's just a question of when. But yeah, I think we see a future where software is going to be created from your application requirements, from your wireframes, from your user stories, from your customer feedback, from your product marketing research sessions. All that content comes together, and Gen AI can already generate valid, or I should say compilable, code. So I do think that will rapidly change the landscape for software development. However, what we're also seeing is that Gen AI still struggles with refactoring and with duplication of code throughout a large code base. And so what you may end up having is the same function repeated multiple times throughout your code base, which is poorly factored, which basically increases the maintenance burden exponentially, which then also leads to bugs. And that's why we're happy to be on the software testing side of things using Gen AI, as opposed to the software development side, because we think we're coming up on a wave of AI-authored code that will need really good QA.
[00:17:40] Joe Colantonio Absolutely. I also always get asked, how can I upskill myself or my team members to be AI proficient? I know there's ChatGPT, but is there anything specific other than that?
[00:17:49] Fitz Nowlan Yeah, there's not a silver bullet for this. I think you have to be naturally curious and do your homework. Stay abreast of all the latest changes in Gen AI. For example, if you see Anthropic roll out Computer Use, download it, use it, and become familiar with it and the concepts behind it. I don't think you need to know how LLMs work at a fundamental level, but you should know, for example, properties or facts about them, such as that they're statistically sampling tokens from basically a world of all possible tokens. You should have a little bit of an understanding of how they're producing content, stuff like that. Apart from that, if you're on the QA side of things, it comes back to knowing which tasks you're a differentiator on as a human in the loop and which you're not. For example, if you're doing the same sequence of seven clicks every day when you're testing something, that's probably something that could be automated in the future. That's not somewhere you're creating value.
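To illustrate the "statistically sampling tokens" point Fitz mentions, here's a toy Python sketch; the probabilities and temperature handling are made up for illustration and are nothing like a production LLM, but they show why the same prompt can produce different outputs on different runs.

```python
import random

def sample_next_token(probabilities: dict, temperature: float = 1.0) -> str:
    # Reshape the distribution: lower temperature -> more deterministic,
    # higher temperature -> more random. (Toy version of what real samplers do.)
    weights = {token: p ** (1.0 / temperature) for token, p in probabilities.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for token, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return token
    return token  # floating-point edge case fallback

# Made-up distribution for the next word after "The test ..."
print(sample_next_token({"passed": 0.6, "failed": 0.3, "timed": 0.1}))
# Different runs can return different tokens, which is also why hallucinations
# can't be fully eliminated by sampling alone.
```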
[00:18:38] Joe Colantonio Is there a place you would never want to use Gen AI?
[00:18:40] Fitz Nowlan Oof! Good question. I think there are definitely things today that are not such a burden and are important enough that I don't even want a smidgen of risk, for example, paying my bills. I don't have that many bills. I don't mind going in and queuing up the payment once a month. Over time, as AI gets better, and maybe if the amount of time I have to sink into paying bills increases dramatically, then the trade-off starts to fall in favor of using AI. But for me, the rule is kind of: how much of an inconvenience is it, multiplied by the risk of Gen AI hallucinating or making a mistake? And there still are things that fall under that threshold for me.
[00:19:18] Joe Colantonio Awesome. Last time we spoke, you had been acquired by SmartBear and you had started integrating Reflect with other SmartBear product lines. I'm just curious, and I don't know how much you can talk about it, but I think we said something about Zephyr, how you're integrating the no-code automation into Zephyr. How did that go? Any results from that? Any lessons learned?
[00:19:38] Fitz Nowlan Absolutely. Yeah, I'm glad you asked. We're totally live. We went live in July, and when I say live, what I mean is that Zephyr Scale, which is SmartBear's product in the Atlassian Marketplace for end-to-end test case management, now has the ability to take your plain-text descriptions, your test cases written in text, and execute them in the Reflect automated test execution engine. That was released in July, so all customers on Zephyr Scale now have access to Reflect: they can take their tests and run them in an automated fashion. They can also use record and playback if they don't want to use a plain-text description with our AI. But that's live, we have hundreds of customers using it every month, and it continues to grow. Zephyr Scale itself is a growing product, so that continues to grow the exposure to our automation, and it's going really well. And then on the execution side of things, on the Reflect side, we're rolling out the ability to test native mobile applications using AI instructions. If you think about the state of the world today for testing native mobile applications, you connect to a grid, you have a device physically residing somewhere, and you point and click and test to make sure things work. You'll now be able to express those tests as instructions: open the application, enter a new record, click submit, log in, verify that the application does X, Y, Z. That's going into beta; I think we actually just launched the beta last week with one or two customers, and we're growing the beta throughout the rest of this year and into next year. So it's been going really well. The fit has been really good. I think the blend of Reflect's automation and AI with SmartBear's distribution and the Zephyr Scale customer base has been great, and we're excited for the future.
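As a rough illustration of the plain-text-to-automation flow Fitz describes, here's a hypothetical sketch; the step format and the run_step() executor are invented for this example and are not Reflect's or Zephyr Scale's actual API.

```python
# Hypothetical only: plain-text test steps handed to an AI-backed executor.
PLAIN_TEXT_TEST = """
Open the application
Log in as a standard user
Enter a new record and click Submit
Verify the new record appears in the list
"""

def run_plain_text_test(steps: str, run_step) -> bool:
    # run_step(step) -> bool is a stand-in for whatever engine (browser- or
    # device-driving, LLM-interpreted) executes one natural-language step.
    for step in (s.strip() for s in steps.splitlines() if s.strip()):
        ok = run_step(step)
        print(f"{'PASS' if ok else 'FAIL'}: {step}")
        if not ok:
            return False
    return True
```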
[00:21:19] Joe Colantonio Yeah, really exciting. It sounds really cool. I know it's hard for people to visualize this when it's just audio, so I have a link down below where they can see a demo video of it in action and get a feel for it. All right, I know you're always working on new things, and you just mentioned another new release that just came out. But now we're about to enter 2025. Any predictions for 2025, testing-wise or AI-wise? Last time we did this, your partner mentioned multimodal AI before it came out, and then it became a thing that year. So I'm looking for one of those types of insights you might have.
[00:21:52] Fitz Nowlan I think the big prediction for 2025 is that AI, as I alluded to before, will go from that level of exposing your data in a chat-like interface to actually doing things for you, so, workflows. You mentioned agentic workflows, and I think we'll definitely see that, whether people call it that explicitly or not; some will, some won't. We basically are doing an agentic workflow on the back end, but we don't necessarily call it that. But these specialized workflows driven by AI, potentially constructed by AI and then reported on by AI, all overseen by a human operator or final decision maker. In other words, it's not a revolutionary prediction. I guess what I'm saying is I think it'll reach adolescence, maybe. It'll come out of that early stage of "this is possible, there's this capability here, and I can talk about your data" to "now I can do things for you," and it comes back to you like a kid and says, hey, how did I do? Is this good? Was this bad? What should I change? How could I improve? I think that's the next stage. It's more of an evolution than a revolution, I think.
[00:22:54] Joe Colantonio Why are you hesitant to call it agentic? Is it because agentic is a little different than the workflow that you foresee?
[00:23:00] Fitz Nowlan No, no, no, sorry, I'm not really hesitating to call it that. It's just that I've been doing this for a couple of years now, and I've been doing it without that term attached. That's all.
[00:23:11] Joe Colantonio Gotcha. Right, Right.
[00:23:12] Fitz Nowlan So really, no reason that it's not. I think you're ahead of me on the terminology by a couple of months, like half a year or whatever. No, 100%, it's agentic. In other words, some vendors will expose their agents as things for their customers to control, and other vendors will use the concept of an agentic workflow internally but won't necessarily sell their customer an agentic workflow. We're in that latter category. We're structuring our workflows in an agentic way, but our customer won't know that we're doing that, and they wouldn't think to themselves, this is an agentic AI integration; it's just an AI-powered feature. Whereas you do see some vendors saying, you can use our agents to do this and this and this. So I think I'm just drawing a distinction in the public perception to the customer.
[00:24:01] Joe Colantonio Makes sense. Okay, Fitz, before we go, what's one piece of actionable advice you can give to someone to help them with their AI automation testing efforts, and what's the best way to find or contact you?
[00:24:10] Fitz Nowlan The best advice is: know exactly what your application is supposed to do. That's the number one thing, and it doesn't matter whether you're using AI or not. If you don't know exactly what your application does and who you're serving, who your customer is, then you're not going to have success no matter what you do. But if you know those things, then you will be able to deploy AI in a really useful, powerful, impactful way, because you will stay laser-focused on delivering for your customers. When you're building applications, when you're testing applications, the number one thing is: what should my application be doing, and who am I serving? If you can remember that, I think you'll be successful with whatever vendor or tool you use. In terms of how to get in touch, I'm Fitz Nowlan at SmartBear; you can reach me at fitznowlan@smartbear.com. Don't hesitate to shoot me an email.
[00:24:57] Thanks again for your automation awesomeness. Links to everything of value we covered in this episode: head on over to testguild.com/a455. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help in the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe. My mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:25:28] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:26:17] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.