About this DevOps Toolchain Episode:
In this episode of the DevOps Toolchain podcast, we dive deep into the evolving intersection of AI, IoT, and embedded systems with special guest Hariharan Ragothaman, a seasoned technologist and DevSecOps expert.
Try out Insight Hub free for 14 days now: https://testguild.me/insighthub
Hariharan shares how he went from programming in BASIC as a kid to leading cutting-edge AI server validation today. We explore the mindset shifts needed when moving from embedded systems to cloud-native architectures, and why a security-first approach is no longer optional — it's essential.
We also discuss:
✅ The growing role of AI in embedded systems and IoT — and what that means for testers and engineers.
✅ Practical strategies for building a security mindset (even if you don’t think of yourself as a “security person”).
✅ Favorite tools and techniques for shifting security left, including real-world examples and open-source tips.
✅ The balance between technical depth and leadership skills in an AI-powered future.
✅ Hariharan’s personal approach to staying ahead of the curve, from continuous learning habits to favorite books and tools.
Whether you're deep in DevSecOps, testing embedded devices, or just curious about where AI and IoT are taking us next, this episode is packed with actionable advice and fresh perspectives to help you stay ahead.
TestGuild DevOps Toolchain Exclusive Sponsor
SmartBear Insight Hub: Get real-time data on real-user experiences – really.
Latency is the silent killer of apps. It’s frustrating for the user, and under the radar for you. Plus, it’s easily overlooked by standard error monitoring alone.
Insight Hub gives you the frontend to backend visibility you need to detect and report your app’s performance in real time. Rapidly identify lags, get the context to fix them, and deliver great customer experiences.
Try out Insight Hub free for 14 days now: https://testguild.me/insighthub. No credit card required.
About Hariharan Ragothaman
Hariharan Ragothaman is a seasoned technologist currently leading AI server validation at AMD, where he architects next-generation data center solutions. With over a decade of experience spanning embedded systems at Bose Corporation to cloud infrastructure at athenahealth, Hariharan has earned recognition including induction into athenahealth's Hall of Fame and multiple CEO awards. He's published extensively on AI-powered cybersecurity and DevSecOps, serves as a judge at major hackathons from MIT to UC Berkeley, and holds senior memberships in IEEE and ISA. When he's not building the future of computing infrastructure, you'll find him mentoring the next generation of engineers and driving innovation at the intersection of AI, security, and scalable systems.
Connect with Hariharan Ragothaman
- Website: https://hariharanragothaman.github.io
- LinkedIn: https://www.linkedin.com/in/hariharanragothaman
Rate and Review TestGuild DevOps Toolchain Podcast
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:00] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability for some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast and my goal is to help you create DevOps toolchain awesomeness.
[00:00:19] Hey, today we'll be talking with Hariharan all about embedded systems, security, AI, DevOps, all the things. If you don't know, you're in luck. He's a seasoned technologist currently leading AI server validation at his current company, where he architects next-generation data center solutions. He has over a decade of experience spanning embedded systems to cloud infrastructure, and he's earned recognition including induction into the athenahealth Hall of Fame and multiple CEO awards. He's published extensively on AI-powered cybersecurity and DevSecOps, and he's served as a judge at major hackathons from MIT to UC Berkeley. He's done it all. You don't want to miss it. Check it out.
[00:00:56] Hey, before we get into this episode, I want to quickly talk about the silent killer of most DevOps efforts, and that is poor user experience. If your app is slow, it's worse than your typical bug. It's frustrating. And in my experience, and that of many others I've talked to on this podcast, frustrated users don't last long. But since slow performance is subtle, it's hard for standard error monitoring tools to catch. That's why I really dig SmartBear's Insight Hub. It's an all-in-one observability solution that offers front-end performance monitoring and distributed tracing. Your developers can easily detect, fix, and prevent performance bottlenecks before they affect your users. Sounds cool, right? Don't rely on frustrated user feedback anymore. But, as I always say, try it for yourself. Go to smartbear.com or use our special link down below and try it for free. No credit card required.
[00:01:53] Joe Colantonio Hey, welcome to The Guild.
[00:01:57] Hariharan Ragothaman Thanks, Joe. Thanks for the long introduction.
[00:01:59] Joe Colantonio Absolutely. I guess before we get into it, I'm always curious to know, how did you get into technology or DevOps?
[00:02:05] Hariharan Ragothaman That's a fairly interesting question. I think I got into programming when I was in, like, sixth grade. My first language was BASIC, which I think stood for Beginner's All-purpose Symbolic Instruction Code, if I remember it right. So I started doing that, and then eventually slowly leaned into C++. And I think when I was in my 12th grade, that was the time when I realized I had an initial interest in robotics. I realized I had some leaning towards learning electronics, which naturally progressed to doing a bachelor's in electronics. That's where I started, and that's how I started getting interested in technology. I was always passionate about interfacing hardware and software. And the time when I made my first robot and I saw it move, I was like, oh, okay, this is really fun. I thought I should pursue my interests in that domain. But how I eventually got into DevSecOps or AI, I think that's a separate story by itself. But to answer your initial question, that's how I got into the technology space.
[00:03:04] Joe Colantonio All right, hold on. I mean, when I was 6, I was probably still playing outside with sticks and rocks. How did you get into BASIC? Were your parents into programming? Was your school early?
[00:03:14] Hariharan Ragothaman It was my school. Yeah, I mean, I also definitely did play with rocks, stones, sticks, some of that stuff. But I think it was my school that introduced me to programming at a very early age. And that probably helped me out eventually.
[00:03:28] Joe Colantonio So I know you've worked across many embedded systems at different companies, and now cloud-native validation. What's the biggest mindset shift you had to go through going from embedded to cloud native? And was it a big shift, or was there anything you had to learn to go from one to the other?
[00:03:43] Hariharan Ragothaman I started off my career doing a lot of testing on the embedded side, which I think helped me mentally map the fundamentals I had learned in school and college and actually apply them in a production environment. That was really intriguing. At the same time, I was able to see the code in action, you know what I'm saying? With regard to the mindset, I felt you need to be really inquisitive and have a lot of persistence when you're dealing with embedded systems, right? There's a lot of debugging that needs to be in place. And depending on the environment you're in, you probably also have to spend long hours figuring out something that may look completely trivial. In one of my projects in the embedded systems world, I had to integrate Alexa and Google Voice APIs into my embedded systems. That was when I realized I had to do a lot of upskilling in the cloud world. I definitely started doing a lot of the certifications that AWS provided. And I started developing a systems-level approach in terms of how I would architect systems and put things in place, rather than trying to zero in on a particular problem and making it perfect. I almost zoomed out and looked at how I would build systems. And I think that was the technical shift in mindset when I had to deal with cloud-native architectures. I always go back to that. I don't remember the full quote, but there was a fantastic quote that I read online which says that you don't rise up to the level of your goals, but you fall down to the level of your systems, or something like that. I've always enjoyed that quote. I think it was more big-picture thinking and how I arranged all the various Lego pieces to get something done. I think that was a big technical shift I had to go through.
[00:05:25] Joe Colantonio With AI and machine learning now entering the picture, do you see a bigger need for embedded systems testing?
[00:05:32] Hariharan Ragothaman I think with the influx of AI, there's going to be a lot more prominence for embedded systems and security, given the rapid adoption of AI. But how it actually trickles down into those sectors would have to be analyzed on a case-by-case basis. Definitely, in the embedded systems world, we actively use different sensors for different purposes, so there's definitely going to be a lot of value. You need to really care about security. But I'm not sure I'm fully getting the question you asked, in terms of what the takeaway should be. Should I be talking about how it affects that sector?
[00:06:13] Joe Colantonio No, I'm always looking for trends. I don't know, maybe someone's listening and you're like, hey, here's an opportunity. AI and machine learning are growing, and obviously IoT is growing. The two together are probably going to cause a rise in embedded systems, is my theory. I could be wrong. I could be right. Just trying to get a pulse on where you see that area.
[00:06:31] Hariharan Ragothaman I think that's an interesting question. I really like that. Something that I've been thinking is, if you look at the big tech companies, it's estimated that pretty much most of their code is going to be written by AI. But when you really consider sectors that involve security, or embedded systems that are more hardware-specific, AI could be a great assist, but it also requires a lot of context on the hardware architecture and even the security mindset that is needed, which might also involve a human perspective. And I think that would take a while for AI to actually get better at. But certainly, I feel like the fields of embedded systems and security are definitely going to have a lot more prominence, given that the other fields are getting automated.
[00:07:20] Joe Colantonio How do you get that security mindset? A lot of people just focus in on one thing, and for some reason with security, I think they assume it's a different group that handles it.
[00:07:29] Hariharan Ragothaman Yeah, once again, that's a great question. Going back to my experience, when I started my career, I never really cared about security in the first place, to be fairly honest. I cared more about getting stuff right, maybe pushing stuff to production, and shipping things fast. I think everyone starts there. But at one of my previous employers, I started architecting DevSecOps pipelines and had the opportunity to manage large-scale Kubernetes deployments. That was when I really had to care about security, not because I had the security mindset in the first place, but, to be fairly honest, because I was working in a very regulated industry where the pipelines were subject to multiple audits in different quarters. We really had to care about security. That's when I began to wonder why security is important and how we have to care about security at an early stage of the software pipeline. And embedding security at every stage of the software development lifecycle became really crucial to ensure the overall success of the project. In fact, some of my projects involved shifting security to the left. And to quote from some past experiences, we started creating software bills of materials (SBOMs) for all the software that we were shipping, and we wanted to catch vulnerabilities at a fairly early stage. That's when we started integrating several tools, right from the point when a developer builds code in their IDE, to different means of scanning when it gets built, when it gets delivered to container registries, and when it gets deployed to different cloud environments. So to answer your question, I started caring about security when I started working for a regulated employer, but then harnessing that security mindset became a skill through a bunch of certifications and online courses that I did. It was also easy for me to naturally progress to a security mindset because I'm from embedded systems, and whenever I had an issue to debug, say a memory leak when I'm building something, I had to use tools like Valgrind or a debugger to really figure out where the issue was coming from. Flipping the coin, or looking at it from the other side, I do care about how a particular system can be breached or exploited. That's when I started slowly building that security mindset and started caring about how I can build in processes and systems that are more secure, such that I deliver scalable and secure software.
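To make the shift-left idea concrete for listeners, here is a minimal sketch (not from the episode) of catching vulnerable dependencies before they ship, by checking pinned Python packages against the public OSV vulnerability database. It assumes the requests library and a requirements.txt with name==version lines; real pipelines would typically lean on dedicated tools like pip-audit or the scanners discussed below.

```python
# Minimal shift-left dependency check against the OSV database (osv.dev).
# Assumptions: requirements.txt uses pinned "name==version" lines and the
# `requests` package is installed. Illustrative only, not production code.
import sys
import requests

OSV_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str) -> list[str]:
    """Return IDs of known vulnerabilities for one PyPI package version."""
    resp = requests.post(
        OSV_URL,
        json={"version": version, "package": {"name": name, "ecosystem": "PyPI"}},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

def main(path: str = "requirements.txt") -> int:
    findings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # skip comments and unpinned requirements
            name, version = line.split("==", 1)
            ids = known_vulns(name, version)
            if ids:
                findings[f"{name}=={version}"] = ids
    for pkg, ids in findings.items():
        print(f"{pkg}: {', '.join(ids)}")
    # A non-zero exit fails the CI job: that is the "gate at every stage" idea.
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(main())
```

Run as an early CI step or a pre-commit hook, a check like this flags risky dependencies while the developer is still in their IDE rather than after a deploy, which is the spirit of shifting security left.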
[00:10:08] Joe Colantonio You mentioned tooling. Is there any particular tooling you recommend along the stages of the pipeline? For example, is a scanner or a sniffer a must-have to check your code for any security vulnerabilities? First off, how do you shift it left, and then how do you keep it going all the way through the pipeline?
[00:10:25] Hariharan Ragothaman I think I shouldn't say it's a great question every time. I always feel like it's a good question. It's a good question. If I understood right, you're asking two things: what are the steps I took to shift it towards the left, and at the same time, what are some open source tools, or tools in general, that I leveraged. I started off using tools that the company JFrog provided. They had a tool called Xray, which could scan for vulnerabilities. It's not open source, it's proprietary; you have to subscribe to JFrog products. That's how we started off. The Xray product itself could be integrated with your IDE, so as you're writing code, it could pretty much tell you what your various vulnerabilities are. And you could also scan your packages or your Docker images after you deploy them, and then report on them. But to quote some open source tools that I have actively used, I've used Nuclei, which is an offensive security scanner, and I've integrated it with tools like PagerDuty to alert you whenever there's an issue. If you eventually end up building your own DevSecOps AI agent, you could pretty much integrate Nuclei into it, and it could potentially be like a small teammate you have on your own laptop or in your terminal. That's a possibility. The other tool I've also used is, going back, when you use the JFrog platform and you actually upload your images, they have the capability to generate software bills of materials. But given that's proprietary, you could leverage open source tools like, say, Syft, which is an SBOM generator. It can generate SBOMs in SPDX and other formats of interest, and you can pretty much flag unknown licenses or vulnerable dependencies fairly easily. And I think the mindset of shifting left in the pipeline is about ensuring that security is not just a gate that you need to tick off. It's something that has to be embedded at every stage of your pipeline, so that the whole pipeline is secure as a whole. I think that's the overall mindset, rather than just trying to shift it towards the left.
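As a rough illustration of the SBOM workflow described above, here is a small Python wrapper around the open-source Syft and Grype CLIs (both Anchore projects). It assumes both binaries are installed and that their flags and JSON fields match recent releases, so treat it as a sketch to adapt rather than a drop-in script.

```python
# Sketch: generate an SPDX SBOM with Syft, then scan it with Grype.
# Assumptions: the `syft` and `grype` CLIs are on PATH; exact flags and
# output fields can vary between releases, so verify against --help output.
import json
import subprocess
import sys

target = sys.argv[1] if len(sys.argv) > 1 else "."  # a directory or an image reference
SBOM_FILE = "sbom.spdx.json"

# 1. Generate an SPDX-format SBOM for the build artifact.
with open(SBOM_FILE, "w") as out:
    subprocess.run(["syft", target, "-o", "spdx-json"], stdout=out, check=True)

# 2. Scan that SBOM for known vulnerabilities.
scan = subprocess.run(
    ["grype", f"sbom:{SBOM_FILE}", "-o", "json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(scan.stdout)

# 3. Fail this pipeline stage if anything high or critical shows up.
severe = sorted({
    match["vulnerability"]["id"]
    for match in report.get("matches", [])
    if match["vulnerability"].get("severity") in ("High", "Critical")
})
if severe:
    print("High/critical vulnerabilities found:", ", ".join(severe))
    sys.exit(1)
print(f"SBOM written to {SBOM_FILE}; no high/critical findings.")
```

The same two steps can run when the image is built, again when it lands in the container registry, and again on a schedule against what is deployed, which is what embedding security at every stage looks like in practice.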
[00:12:31] Joe Colantonio It seems like you're always learning new things. How do you stay up to date with the latest and greatest? Do you have a methodology you go through? Is it just something you've always done? Do you have any habits or mindsets you use to help you keep going and learning new things?
[00:12:44] Hariharan Ragothaman I mostly operate from a first-principles approach. I try to keep my fundamentals strong, and I always practice. I sort of divide that into, say, two to three sectors. One is having the proclivity, or the ability, to code the concepts or ideas that you're able to formulate. The second is to have a really strong understanding of systems and be able to design systems at large. The third sector would be more about having some domain-specific knowledge that maps to your area of interest. I would summarize them as coding, a systems approach, and then a domain that you would like to associate yourself with. From a coding standpoint, I frequently practice on the websites that are available online, like LeetCode, Codeforces, and so on. From a systems approach, I tend to follow a lot of creators, both on YouTube and Substack and a few other places, where they deliver tidbits about how to improve your systems thinking. And I don't know if I can refer to some books right now, but I've always liked -
[00:13:47] Joe Colantonio I love books. Any references to books, if you have them off the top of your head, but you don't have to.
[00:13:53] Hariharan Ragothaman Yeah, sure. I mean, I keep rereading Designing Data-Intensive Applications. And I'm also a big fan of the System Design Interview books, both part one and part two. I don't know if I'm pronouncing the author's name right, I think it's by Alex Xu. That's a really good book, and it gives you what you need to think from a high-level view when implementing software systems. Think of it like this: Designing Data-Intensive Applications is potentially the systems-level thinking Bible, and System Design Interview by Alex Xu is more of an abridged version of that. And to move to the third piece, from the domain aspect, I think I've been all over the place. In the early stage of my career, I was trying to improve my understanding of computer architecture and embedded systems, and I've always tried to refine my knowledge in that space. But then once I moved to DevSecOps and security, I started to care more about certifications, listening to a lot of tech talks, and trying to be up to speed on all the latest industry trends in terms of what's happening in that space. And that's how, in the recent past, I got interested in how AI is being applied to DevSecOps and how that could potentially be game-changing in the coming years.
[00:15:08] Joe Colantonio All right. So with AI, though, do you see tech skills still being top priority, or do you see more of a shift now into leadership and management-type skills, or do you not see it as a change either way?
[00:15:22] Hariharan Ragothaman So I think finding the right balance between tech skills and leadership is crucial, because with the advancements in AI, where we all have ChatGPT and Gemini and Claude and a few other tools at our disposal, I feel the barrier to actually enter this field has been drastically lowered and the space has been completely democratized. You just need your phone to start to code, pretty much. That's definitely there. On the other hand, tech leadership, or finding really niche use cases where you could potentially even vibe code and build something innovative, is also prevalent. But to answer the larger question in terms of how you build scalable products, or products that are really robust and can stand the test of time, that would still demand a lot of technical prowess and knowledge. It might be a little easier in certain industries and really hard in others. For example, if the product you're building is, say, a website, and that's your entire product, AI can spin it up really fast for you, probably even take it to production, and it can really scale. But if your product is going to be something really niche, let's say you're trying to build an edge server that's going to cater to a specific subset, that's going to demand a lot more niche knowledge. Let's say you're trying to build something that's going to be integrated into a data center, say a network abstraction that interfaces with network switches or something of that sort. That's going to require a lot of domain knowledge. What I feel is, if the problem statement you're trying to solve requires a lot of domain knowledge and technical prowess and addresses a niche space, AI alone won't solve the problem. That being said, leadership is definitely important. I feel like tech leadership is sort of a common denominator; it should always be there. But I'm more trying to answer the question in terms of when AI would really be a great assist, and when you should rather tell AI, you need more context, I have more context and I'm going to provide it to you. It's more of that. And I feel tech leadership, really caring about the overall momentum, how and when products are delivered, and taking things from zero to one and scaling them, those are common denominators that will always be there. They are perennial characteristics that we should always care about. But I hope that painted a picture.
[00:17:56] Joe Colantonio I've been vibe coding with Cursor for a bit, and I know AI is going to get better, but it is uber frustrating. It can create something really quick that looks good, and then when you get into it, it's like, this is not what I want. So I could definitely see you get lulled into thinking AI is really doing something great until you actually start looking into it. I don't know if that makes sense, but.
[00:18:16] Hariharan Ragothaman It definitely makes sense. I can be very explicit, right? One of the side projects that I'm working on, I mean, it's not a big secret, I can just share it on this podcast.
[00:18:27] Joe Colantonio You heard it first, right there.
[00:18:31] Hariharan Ragothaman So see, one of the side projects that I'm working on: I was talking to my young cousin, who was just starting to read books. With the influx of AI, I wanted to make a side project for him. Imagine you're reading Harry Potter or something like that, and let's say you've read 50 to 60 pages of that book, and you want to interact with one of those characters. So let's say you have a Harry Potter e-book, and you've flipped 50 pages, and you kind of know who Harry Potter is, who his dad is, who Severus Snape is, who Dumbledore is, and so on and so forth. And then let's say you want to talk to one of those characters. I wanted to create an interface where you talk to one of those characters, an avatar of that character shows up, and the child could start interacting with those characters. When I tried to vibe code this, the AI could easily generate a GitHub scaffold for me and could really get me started. But then when I had to really build out the subcomponents, like, say, generate a 3D avatar of the character that I care about and show it on screen, or have the character interact through context that carries across different pages, those were harder problems that I could not just vibe code. I needed to have a lot of domain knowledge on how to actually do that, and I also needed to care about how to deploy it. Those are slightly harder problems that AI could not just solve in a jiffy; it's more of a shared partnership that I have with AI. It could definitely do a lot of the trivial stuff. Similarly, I mean, I'm just talking about the problem statement. Imagine trying to test this out. When you try to write automated tests for this, that's a huge challenge by itself.
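To give a flavor of the context problem being described, here is a rough, hypothetical sketch of the core prompt logic, not Hariharan's actual project. It assumes the OpenAI Python client and a book already split into a list of page strings; the 3D avatar, e-book parsing, and deployment pieces he mentions are exactly the parts this leaves out.

```python
# Hypothetical sketch of a "talk to a character, no spoilers" chat.
# Assumptions: the OpenAI Python client (`pip install openai`), an API key
# in the environment, and `pages` as a list of page-text strings.
# Illustrative only; this is not the project described in the episode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_character(pages: list[str], pages_read: int, character: str, question: str) -> str:
    # Only the pages the reader has already seen go into the context,
    # so the character cannot reveal events from later in the book.
    excerpt = "\n".join(pages[:pages_read])
    system = (
        f"You are {character} from this book. Answer in character. "
        "You only know the events in the excerpt below; if asked about "
        f"anything beyond it, say you don't know yet.\n\nEXCERPT (pages 1-{pages_read}):\n{excerpt}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

# Example usage: the reader is 50 pages in.
# print(ask_character(pages, 50, "Harry Potter", "What do you make of Snape so far?"))
```

In practice the excerpt would need chunking or retrieval to fit model context limits, and, as noted in the conversation, writing automated tests for behavior like never spoiling later chapters is its own hard problem.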
[00:20:13] Joe Colantonio I know all the rage now is agentic AI, AI agents, MCP servers. Any views on that? I think there's a paper that was just released. Just curious to get your views.
[00:20:21] Hariharan Ragothaman Yeah, I recently read the paper. I think it was titled The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. The way I see it is, in recent times, there's been a lot of chatter about AI agents and agentic AI. In simple terms, AI agents are super efficient, and they do things autonomously when there is a directed goal. And agentic AI, in very simple terms, would be multiple agents interacting with each other. I feel the whole term agentic AI is more semantic; if you try to find the difference between multi-agent systems and agentic AI, I feel they're more or less similar. And in the recent paper that Apple researchers released, they tested the frontiers of large reasoning models. Overall, I feel the paper is fairly balanced and has strong takeaways. The larger question the paper tries to ask is whether these models that we're actively using through Claude or GPT have true reasoning capabilities. As a user, I feel it at least gives you the feeling that they reason really well. But the paper pokes into the fact that if you give them super hard synthetic puzzles, can these models actually generalize and come up with really novel solutions? And I feel that's just one slice of the pizza that they're looking at, and the title was certainly meant to poke, or trying to prove a point. That was my takeaway from it. And I feel, overall, all the AI tools that we use will struggle. They will struggle when you give them niche problems or problems they have never seen before. That's always going to be there. And as always, they're going to learn and just keep getting better and better. Recently I've started using o3-pro and trying to test its capabilities, but that's my overall take on it.
[00:22:10] Joe Colantonio Yeah, so do you see, like, OpenAI and Microsoft have been making some outrageous claims? They're even doing layoffs, saying AI is replacing things. Do you think that's more of a play for profits rather than actual results they're seeing? Because Apple seems a little more balanced. They don't seem as bombastic with AI as Microsoft has been. I don't know if that makes sense, but.
[00:22:30] Hariharan Ragothaman Yeah, the question completely makes sense. From my point of view, I can merely speculate, because even at one of the recent conferences, I think it was the Llama conference that Meta organized, there were takeaways on how much code is being automated through AI at Meta, how much code is being automated at Microsoft, and so on. Whenever I hear those statements, I feel there is a lot of scope for code being automated, for sure. For example, if you've written Dockerfiles or Terraform code or any such cloud configuration files, you do know they could be automated down the line, when you have clear specifications. For example, Google recently released an AI version of kubectl. You could pretty much give it natural-language commands, and it could maybe create another replica set, or change the configuration of your Kubernetes cluster, and so on. You do know that in that space, there's a lot of automation happening. But whether that actually translates into an actual reduction in force, I'm not entirely sure about. That being said, I think for both the big tech companies and other tech companies, as AI starts automating trivial tasks, we would end up moving on to other higher-order problems that demand solving, or other niche problems that would need a human perspective or context. So that's how I view it personally.
[00:23:51] Joe Colantonio Great advice. Okay, before we go, is there one piece of actionable advice you can give to someone to help them with their DevSecOps efforts? And what's the best way to find or contact you?
[00:24:01] Hariharan Ragothaman Great. From a DevSecOps standpoint, as I mentioned, definitely get a good hold of coding. It's always a great skill to have. My personal recommendation is to start with Python and then move on to C++ or any other language of your choice. I definitely highly recommend books such as Designing Data-Intensive Applications and System Design Interview by Alex Xu. Also, if you're really interested in the DevSecOps space, start playing with a lot of tools, whether open source or proprietary. Every tool is going to give you a certain takeaway. That being said, also remember to zoom out and think, okay, I have all these tools, what can I build with them, or how can I use these tools in the right way to orchestrate something I really care about. Kubernetes is always a great thing to learn, but I feel it's like a deep, dark ocean that you have to know how to sail through. It's a great skill to have in the DevSecOps world. And if you're really into security, doing a bunch of certifications given by the various cloud providers, or even OWASP and similar security organizations, is really helpful. I think that covers it to an extent. But again, something that I keep telling myself is that whenever you're trying to learn something new, it's always prudent to take both a depth-first and a breadth-first approach. First, I try to analyze the lay of the land and see what the various tools and avenues are that I can learn from, and then pick and choose what I want to learn in depth. That's something that I did. Even from a programming language standpoint, I learned a few programming languages and then decided that I needed to really get good at Python. Similarly, when there were a lot of cloud certifications available, I picked AWS and wanted to get good at it. Similarly, I picked Kubernetes and wanted to get better at it. And then when you're trying to, say, learn how deployments happen and so on, there are a lot of tools I had been introduced to, like FluxCD, ArgoCD, and so on. I picked FluxCD, although it may not be the best of choices, tried to become good at it, and tried to see how I could leverage it in the GitOps model that I have, both professionally and personally, to manage deployments. So it really comes down to having a breadth-first approach initially, then trying to understand the various things you could leverage to either master skills or build a product, and then picking and choosing the things you really want to get good at. I think that's the advice I would give. You could message me on LinkedIn or follow me on LinkedIn, or also email me, and I think both work.
[00:26:36] And we'll have all those links for you down below.
[00:26:39] All right, before we wrap it up, remember, frustrated users quit apps. Don't rely on bad app store reviews. Use SmartBear's Insight Hub to catch, fix, and prevent performance bottlenecks and crashes from affecting your users. Go to SmartBear.com or use the link down below, and try it for free for 14 days, no credit card required.
[00:27:00] And for links to everything of value we've covered in this DevOps Toolchain show, head on over to testguild.com/p194. So that's it for this episode of the DevOps Toolchain Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.
[00:27:22] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com, where you become part of our elite circle driving innovation in software testing and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real-world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:28:05] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.