CTO Insights on Gen AI’s Impact on DevOps with Matt Van Itallie

By Test Guild

About this DevOps Toolchain Episode:

Today, we'll speak with the remarkable Matt Van Itallie, founder and CEO of Sema, to explore the rapidly evolving intersection of DevOps and generative AI.

In this episode, we delve deep into how CTOs can effectively communicate the value of clean and legacy code investments to non-technical stakeholders by focusing on tangible business and technology outcomes.

Discover how the groundbreaking Gen AI tool is revolutionizing internal communication within companies, fostering enhanced developer productivity, and speeding up project timelines. Matt will share his expertise on setting up a developer council, utilizing Gen AI to strengthen DevSecOps, and the importance of the open-source bill of materials in today's tech ecosystem.

We'll tackle the crucial subject of mitigating legal risks, data leakage, and the security aspects of code generation with AI tools, ensuring that you're equipped with strategies to responsibly incorporate these robust solutions in your workflow.

Learn the pivotal role this innovative product plays for CTOs and how developers can advocate for its adoption. And for something a bit lighter, we'll even mention a quirky seasonal creation, “Peepshi,” blending sweet treats in an unusual sushi form.

Stay with us as we explore insights into AI's impact on software development, understand the need for comprehensive AI policies, and grasp the concept of a “Gen AI bill of materials.” This episode is jam-packed with actionable information for any DevOps professional navigating the shifting tides brought on by generative AI.

Listen up!

TestGuild DevOps Toolchain Exclusive Sponsor

BUGSNAG:  Get real-time data on real-user experiences – really.

Latency is the silent killer of apps. It’s frustrating for the user, and under the radar for you. It’s easily overlooked by standard error monitoring. But now BugSnag, an all-in-one observability solution, has its own performance monitoring feature: Real User Monitoring.

It detects and reports real-user performance data – in real time – so you can rapidly identify lags. Plus gives you the context to fix them.

Try out Bugsnag for free, today. No credit card required.

About Matt Van Itallie


Matt Van Itallie is the Founder and CEO of Sema.

He learned to code on a Commodore 64.

Sema's AI Code Monitor helps engineering teams capture the benefits and manage the risks of using GenAI tools to code.

Connect with Matt Van Itallie

Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability for some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Hey, it's Joe, and welcome to another episode of the Test Guild DevOps Toolchain. Today, we'll be talking with Matt about all things like the future of code and how to bridge the AI divide in software development. I'm also going to touch on how to make your CTO happy. Or, if you're a CTO, this is definitely a must-listen as well. If you don't know, Matt Van Itallie is the founder and CEO of Sema, which builds a bunch of cool tools to help companies really capture the benefits and manage the risks that come with generative AI, probably things you're struggling with right now. You're going to learn how to bridge the gap between tech and non-tech audiences, and why this is so important, especially nowadays. You won't want to miss this episode. Check it out.

[00:01:03] Hey, if your app is slow, it could be worse than an error. It could be frustrating. And one thing I've learned over my 25 years in the industry is that frustrated users don't last long. But since slow performance isn't sudden, it's hard for standard error monitoring tools to catch. That's why I think you should check out BugSnag, an all-in-one observability solution that has a way to automatically watch for these issues: real user monitoring. It checks and reports real-user performance data in real time so you can quickly identify lags. Plus, you can get the context of where the lags are and how to fix them. Don't rely on frustrated user feedback. Find out for yourself. Go to bugsnag.com and try it for free. No credit card required. Check it out. Let me know what you think.

[00:01:55] Joe Colantonio Hey, Matt. Welcome to the Guild.

[00:01:58] Matt Van Itallie Thanks so much for having me, Joe. I am so excited to be here.

[00:02:02] Joe Colantonio I am pumped to have you because I don't think I've ever talked with someone from more of a CTO perspective, and I think we'll touch on a few of those topics as well. But I'm just curious to know before we do, is there anything I missed in your bio that you want the Guild to know about first?

[00:02:16] Matt Van Itallie No. You did a great job. I do really like making Peepshi. I feel like I should bring that up. You want to guess what Peepshi are or do you already know?

[00:02:23] Joe Colantonio I don't know.

[00:02:25] Matt Van Itallie All right, Peepshi. It's the right season for this. It's an amalgamation of sushi made out of marshmallow Peeps. Nondenominationally, you take the marshmallow Peeps, and those are like the fish. Rice Krispies treats for the rice. And then for the sushi rolls, the seaweed wrap is Fruit Roll-Ups. If you have anyone in your house who is a child at heart, or is actually a child, it's the most fun thing to do in the spring.

[00:02:53] Joe Colantonio Oh my gosh, I have to google this. This is totally new to me. That sounds awesome, for sure. So speaking about new things I'm just learning about: generative AI. Let's just jump right in. It's been a hot topic the past year and a half especially, but it seems like your company has been around longer than ChatGPT and the like. How did you get into generative AI? Did you start from the beginning knowing AI was going to be really good and take off? It seems like you timed it perfectly for what you're doing right now.

[00:03:22] Matt Van Itallie Oh, that's very kind of you to say. I'm not sure that's true, but I will gladly take it. All credit to our great team and our great clients. We've been around for seven years, and you're exactly right that long before the rise of generative AI, we got started helping provide, let's call it, a CTO dashboard in an instant. We call it a comprehensive codebase scan, and it summarizes most of the key metrics about an engineering team in the form of a report, kind of like a credit score report. Based on that, it's been an absolute honor to help evaluate over $1 trillion worth of companies. We spent a lot of time thinking about, well, how do you summarize what's going on in the code, and how do you summarize what's going on with coders, in a way that is defensible to technologists in the detail, but also accessible to non-technical audiences? We all know technologists sometimes get lost in techno-speak. So we had built that tool and been working on it for over six years when some of our amazing clients came to us and said, AI is coming, Gen AI is coming, and it involves understanding the impact on code, and it involves understanding the impact on coders. And we were very fortunate to follow their guidance and help support companies making the transition to Gen AI.

[00:04:43] Joe Colantonio If I'm a CTO, why should I care if code was generated by Gen AI? What kind of insights would I need to know, like, oh, this code was generated by Gen AI?

[00:04:53] Matt Van Itallie Yeah, great question. The first and primary question is: do you want your organization to be using generative AI tools to code or not? We at Sema are extremely bullish. We think the return on investment, measured by developers being happier as well as more productive and being able to build a roadmap faster, really outweighs the costs, and the risks are mitigatable. You need strong positive benefit, low cost, including low cost to implement, and the risks need to be mitigated. But that is a question that legal counsel needs to weigh in on. This is a powerful tool, and when used unsafely, it can really create some risk for the organization. We've seen many cases where organizations don't really have a formal policy yet, and developers who want access to this are bringing their own LLM license to work. We really, really, really strongly recommend that organizations make a policy that the answer is yes, and then give every developer access to the enterprise-grade tools with all the safety features. So, step one: should we use it at all? If you don't want to use it, then of course you should monitor to make sure that people aren't. If you are using it, the two big additional reasons to monitor are, first, to help support developers in sensible use, at a developer-by-developer level. To some extent Gen AI is radically different, but in another way it's just an extension of copying from Stack Overflow, of using open source code, right? It's code you didn't write that could make your life easier and better. With all love to junior devs, it's a tool that needs to be supervised, because they don't have as much context and wisdom as some of the more senior engineers. You want to look at it by developer group at least, and help make sure that they're making good choices.
And finally, looking a little into the future, there are external stakeholders who will ultimately care about Gen AI composition. Think about open source composition today: you need it frequently in conversations with insurers, you need it for certain procurement processes, and you need to keep track of it in diligence to show how much open source you're using, because open source is very powerful, but it comes with some associated risks. We believe Gen AI, which also comes with some of those risks, even though they're mitigatable, will also lead to companies needing to have a Gen AI bill of materials alongside their open source bill of materials.
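To make the "Gen AI bill of materials" idea concrete, here is a minimal sketch of what such a document might look like, modeled loosely on open source BOMs. The format name, field names, and component data below are all illustrative assumptions, not a real Sema or industry schema.

```python
import json

def build_genai_bom(components):
    """Summarize per-component Gen AI provenance into a single BOM document.

    Each component records its size and whether its code is human-written,
    Gen AI blended (modified by developers), or Gen AI pure (unmodified).
    """
    total = sum(c["lines"] for c in components)
    genai = sum(c["lines"] for c in components if c["origin"] != "human")
    return {
        "bomFormat": "genai-bom-sketch",  # made-up format name, for illustration
        "version": 1,
        "components": components,
        "summary": {
            "total_lines": total,
            "genai_lines": genai,
            "genai_percent": round(100 * genai / total, 1) if total else 0.0,
        },
    }

# Hypothetical codebase inventory (names and numbers are invented).
components = [
    {"name": "auth-service", "lines": 12000, "origin": "human"},
    {"name": "report-gen", "lines": 3000, "origin": "genai-blended",
     "tool": "copilot", "model": "unknown"},
    {"name": "test-suite", "lines": 1000, "origin": "genai-pure",
     "tool": "copilot", "model": "unknown"},
]

bom = build_genai_bom(components)
print(json.dumps(bom["summary"], indent=2))
```

The point is the same one Matt makes about open source: insurers, procurement, and diligence will eventually want a machine-readable answer to "how much of this code did humans write, and where did the rest come from?"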

[00:07:33] Joe Colantonio Absolutely. The open source bill of materials was a hot topic a few years ago, and it's only become more important, so I definitely agree with you. I don't know if you can answer this; I guess it all depends on risk. But what about legalities that could come up? I'm just thinking, in the future someone says, oh, you wrote your application based on code that I can trace was trained on mine, unbeknownst to me. And so now you have IP impact and all that. Are those some of the types of risks that we have to worry about? You can't predict the future, but could it go that way?

[00:08:04] Matt Van Itallie It's a great question, Joe. For sure, one of the major risks involved with using generative AI tools is potential legal risk. That's number one. The other three big ones, at a broad category level: second is data leakage, your information getting out into the world because you're putting it into Gen AI, into the LLM, and it's going elsewhere. Third is code quality, including that the code may be less accurate. We all hear about hallucinations with Gen AI, and the impact of hallucinations in code writing is that the code looks right but is incorrect. And fourth is code security. Despite the very strong investment that the LLMs are making in code security, the AI-generated code still is not fully secure. No one would expect that. So those are the four big risks, and I promise I'm going to get to your specific question. The way you solve data leakage is to make sure that your team is using the right corporate-approved tool and all the data stays private, and there are enterprise-grade products that can do that. You solve code quality issues just like every other code quality issue: code reviews, automated tools where you have them, and keeping track of usage. You solve code security the same way. Again, I don't think anyone thinks this, but just to make sure: just because code comes from Gen AI doesn't mean you can skip your security gates. Absolutely not. You should put it through SAST and DAST scans and CVE scans. So those are three of the four. Now we're left with legal. And I should say, besides Peepshi, I also love taxonomies, so apologies if I'm being too taxonomical as we go. Within legal, there are a couple of different forms of risk. The one that you raised, Joe, now I can get to it: the likelihood that creators of the training set data that was used to create the LLMs will pursue legal action against the users of LLMs.
So, an open source community, because that's what the code was trained on, taking action against a user of an LLM. Did I capture your question right?

[00:10:14] Joe Colantonio Absolutely. Yep.

[00:10:16] Matt Van Itallie Okay. Sema assesses this risk as very low, and I promise I'm only going to say it one more time. It is not legal advice. This is why you need to talk to your lawyers.

[00:10:24] Joe Colantonio Yep.

[00:10:25] Matt Van Itallie We think it's very low for a couple of reasons. One, in many of the enterprise-grade tools, Copilot as an example, you can turn off the wrong kind of snippet duplication, and you should most certainly do that. That's one of the reasons you should get an enterprise-grade license. Second, even without the toggle on, very little code is directly copied in. It's less than 1% of what comes through, according to some very trustworthy sources. Three, the enterprise-grade LLM providers have indemnification clauses which say, if you use our product the right way, you can rely on us to protect you if someone from the creators' side comes after you. And fourth and finally, it's extremely unlikely in general for open source creators to take action against users of open source, and now it's even more attenuated. You should always, always, always be a good supporter of the open source community. If your organization doesn't allow paid volunteer time or buy licenses, please go do that immediately. It's absolutely the right thing to do on a variety of levels. But for all those reasons, we are not worried about the legal risk from the creators of the training set.

[00:11:38] Joe Colantonio Alright, cool. Say I'm a CTO, I don't want to have to go through tons and tons of reports, and I have a policy that maybe says no Gen AI should be released. Do insights get bubbled up, or does it just send me a flag and say, hey Joe, someone just checked in code that's generated by AI? Because I don't think this is a DevOps tool per se, so I don't know if it lives in your CI/CD and gets swatted down there. How does it work?

[00:12:03] Matt Van Itallie Yeah. Right now our product is a CTO dashboard, and in beta is a VS Code widget, so developers can see their results as they are coding. We will have alerts very soon. Right now, it is by login: you can see a dashboard with custom rules. So if you say, I want zero, then if it's more than zero, you're going to get the flag when you log in, with traffic lights and all of that.

[00:12:33] Joe Colantonio Nice. I like to be optimistic, but is there a way, say you hire someone who's supposed to be a senior developer, and as a CTO you just want to check this person's code, and you notice 100% of their code is always Gen AI. Is that a concern? Is that something that could help you know, like, oh, I'd better keep up on this, or at least check in to see what's going on?

[00:12:57] Matt Van Itallie Yeah, super good question. We've been understanding codebases and interpreting a credit score of code for seven years across 850 organizations. At Sema, we always like to start with curiosity about data, especially data about code, before judgment, in part because there's so much context. Code is a craft; it's not assembly-line work. Up isn't always better, down isn't always better. So we like to lead with curiosity. That's part one. Part two: we, the industry, are at the beginning of this incredible journey of figuring out the optimal amount of pure generative AI code and blended generative AI code. Pure generative AI code is copy-pasted in without any modification; blended means developers modified it. The lesson from open source, the third-party code folks have used that they didn't write but can largely rely on, is to not touch it. Okay, I found the right library, I'm going to reference it, and I am definitely not going to modify it, because that creates legal risk and that creates security risk. The opposite is actually true for generative AI. Pure code is more dangerous than blended code. That doesn't mean you can't have any pure code. But if code is literally 100% untouched by humans, then it didn't get the security check, it didn't get the quality check, and it probably doesn't have the context associated. We like generative AI usage in the SDLC, but we want that blending. And again, I've spent too much time working with engineers not to say this: for any CTO who has this observation, the first step is, let's go have a conversation with your teams. Engineers are so smart. Have them hear from legal: here are the pros and cons, here's what you need to think about.
There's nothing worse than top-down judgment, and I just cringe thinking about it, before you've given developers a chance to understand what great looks like and the why behind it. We're always bullish on engineers doing the right thing when they have all the information.

[00:15:11] Joe Colantonio And this may sound low-tech, but communication is so important in a culture, and it seems like a tool like this, like you said, can open up communication between the CTO and the developers: hey, we noticed this, why are you doing that? And get the flow of communication going between the teams, which is healthy.

[00:15:27] Matt Van Itallie Exactly right. And this is not a gotcha. It's not a surprise. It's a group exercise. And I hope this isn't too salesy, because it's only a white paper: we wrote a paper about a month ago on unleashing developer productivity when you're using a Gen AI tool, and you don't need Sema's tool to do it. It's things like, let's set up a developer council with a range of perspectives, skeptics about Gen AI and advocates of Gen AI. Let's hear what the big goals are and let's just experiment. What would we expect to see this week? Do we want juniors to be using it this much? Do we want it blended that much? And just look at the data together. It's not a secret dashboard; it's a joint discussion, developer-contributed if not developer-led. For any new technology, that's the right way to do it, but it's especially the right way to do it for something that's in their workflow like Gen AI.

[00:16:17] Joe Colantonio As you mentioned, it's not always black and white. It depends on risk, quality, probably different industries, different verticals, all that stuff. How do you know as a CTO which metrics to look at to know, hey, we're doing well? Or is that once again subjective? Are there any go-to CTO metrics, I guess, is the question.

[00:16:40] Matt Van Itallie I would definitely build this with the team, look at changes with the engineers over time, and see if you're heading in the direction that you would like as an organization. And I know that's a wishy-washy answer, but we really do believe that when you set outside-in numbers and push people to achieve them, you can have some paradoxical results. That said, if you're looking at your most valuable intellectual property, let's say the core product functionality, we'd be pretty nervous about more than 25% pure generative AI. Using that in a sentence: 25% of that code came straight in, copied and pasted from Gen AI. I would worry about the correctness, the quality, and the security. A second outside-in number: if you decide your organization's going to use it, I would move towards at least 5% of your code having Gen AI, because that means in one out of every 20 commits an engineer thinks it's useful to use Gen AI. And if that's not true, then I would unpack what is getting in the way, because we do believe that Gen AI is useful in new code creation, in test writing, especially in test writing, in understanding legacy code, and so on. So if you're below the 5% threshold of any use, I'd be curious, and if you're above the 25% pure usage, I'd take a closer look. And again, that's codebase-wide for important code. If a developer showed up tomorrow with a prototype with 100% pure Gen AI, I'd say amazing. That's one of the most beautiful things about Gen AI: it just lets you experiment. So it really does need to be contextual for which parts of the code you're measuring.
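Matt's two rules of thumb can be sketched as a simple review check: flag less than 5% overall Gen AI use as a curiosity prompt, and more than 25% pure Gen AI in high-value code for a closer look, while exempting prototypes entirely. The function name and flag wording are our own illustration, not Sema's product logic.

```python
def review_flags(total_pct, pure_pct, is_core_ip=True, is_prototype=False):
    """Apply the 5% / 25% rules of thumb from the episode to one code area.

    total_pct: percent of code with any Gen AI origin (pure + blended).
    pure_pct:  percent that is pure (unmodified) Gen AI code.
    """
    flags = []
    if is_prototype:
        return flags  # prototypes: any amount of Gen AI is fine
    if total_pct < 5:
        flags.append("under-adoption: ask what's blocking Gen AI use")
    if is_core_ip and pure_pct > 25:
        flags.append("high pure Gen AI in core IP: check quality/security")
    return flags

print(review_flags(total_pct=3, pure_pct=1))
print(review_flags(total_pct=30, pure_pct=28))
print(review_flags(total_pct=100, pure_pct=100, is_prototype=True))
```

As Matt stresses, these are conversation starters to review with the team, not pass/fail gates to impose top-down.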

[00:18:34] Joe Colantonio Usually when we talk about Gen AI, you see the hype curve, but obviously you're a company, you speak with a lot of companies, you're creating white papers. How many developers are actually taking advantage of Gen AI? Have you seen, say, a 40% increase in Gen AI use last year, and you only see it going higher? Or do you see developers who tried it and said, ah, I'll just code it myself? Where do we stand right now?

[00:18:57] Matt Van Itallie The best estimates we've seen are that over 90% of developers are using Gen AI tooling on a regular basis, at home, at work, or both. And the at-home number is so interesting, of course, because it's such a tell: how you spend your free time is a real indication of your passions and interests and commitment to tools. We do not think this is a fad. We think, conservatively, this is like open source. And I don't have to be a genius to say that it is code someone else wrote that does some of the work for you and lets you do more of the fun stuff. In 2024, if you were trying to build a codebase without open source, you'd be bananas. You'd be bananas because you're wasting time and you're not having any fun. Why would you not use it? Even though there are risks with open source, all those risks are manageable: a CVE detector, working with legal on the legal risks, and so on. It is not bananas in March 2024 to not use Gen AI tooling. It is not bananas yet, but we predict that a year from now, max, in almost any possible circumstance, we would expect any CIO and legal counsel to have dealt with any perceived risks that they see. Again, our perspective is that they're mitigatable now, but within the next year we think it will be at nearly full adoption, and it should be, for all the same reasons that open source is at nearly 100% adoption. It's better for the organization and it's better for the individual developers.

[00:20:31] Joe Colantonio Absolutely. Can you go over it a little now? Say I'm a CTO and I have your solution. How does it work? Do I have a Gen AI where I go, tell me blah blah blah, or does it generate a report for me? Is there a dashboard that visualizes things easily? As a CTO, how does it make it easy for me to handle all this?

[00:20:51] Matt Van Itallie Yeah, it's literally a dashboard. This is a podcast, so we'll just have to describe it, but get ready, listeners. There are big-number widgets at the top of the page. Think of a sandbox example, without naming who it is: 5% of the code was Gen AI-originated. That means 95% was not Gen AI-originated; it could have been open source, could have been a developer, could have been copied from somewhere else. Of that 5%, 4% is pure, since people didn't modify it, and 1% is blended. One plus four equals five. Below that is trend data, how it's changing over time. Below that are groups of repos. We call them groups because sometimes it's product one versus product two, sometimes it's product versus tooling. So you create the groups you want of your repos, and with that, custom rules. Let's say no restrictions on Gen AI usage for the prototype repos, which is definitely what we recommend: you have a group called prototypes and it's always green, anything you want is fine, and it might even be red if you're not using Gen AI at all. And then you have other custom rules for the highest-priority intellectual property, and so on. And then below that, a developer-by-developer view of how they're using code: pure, blended, not Gen AI, et cetera.
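The dashboard math Matt describes is simple: pure plus blended percentages make up the total Gen AI share, and per-group custom rules render as traffic lights. A minimal sketch of that idea, with group names and rule shapes that are our own illustration rather than Sema's actual configuration:

```python
def traffic_light(group, pure_pct, blended_pct, rules):
    """Render one repo group's Gen AI composition as a green/red light.

    pure_pct + blended_pct is the total Gen AI share,
    e.g. 4% pure + 1% blended = 5% Gen AI-originated.
    """
    total = pure_pct + blended_pct
    rule = rules.get(group, {})
    if "min_total" in rule and total < rule["min_total"]:
        return "red"   # e.g. a prototype group where Gen AI isn't being used
    if "max_pure" in rule and pure_pct > rule["max_pure"]:
        return "red"   # e.g. too much unmodified Gen AI in core IP
    return "green"

# Hypothetical per-group custom rules.
rules = {
    "prototypes": {"min_total": 1},    # anything goes, but flag zero usage
    "core-product": {"max_pure": 25},  # cap pure Gen AI in key IP
}

print(traffic_light("core-product", pure_pct=4, blended_pct=1, rules=rules))
print(traffic_light("core-product", pure_pct=30, blended_pct=5, rules=rules))
print(traffic_light("prototypes", pure_pct=0, blended_pct=0, rules=rules))
```

Note how the prototype rule inverts the usual direction: there, a red light means the team is not experimenting with Gen AI at all.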

[00:22:09] Joe Colantonio Awesome. How hard is it to implement, then? Do I need developers to do it for me, or is it easy? Does it sit on top of something? How do I get it integrated?

[00:22:20] Matt Van Itallie It takes about five minutes to set up. For those using GitHub, it's a connection to the GitHub API, and if you're not using GitHub, we have other methods. So, a couple of minutes of work.

[00:22:33] Joe Colantonio How do we get this in the hands of people, then? Do you just target CTOs? Or, if a developer is listening, would you recommend the developer go, hey, let me tell my CTO about this, because I think it will really help them, and by helping them it'll help us, because then everything goes smoothly? What are some benefits for a developer in telling their CTO, hey, you should check this out?

[00:22:52] Matt Van Itallie Yeah, I really believe this is valuable, full stop. I believe the faster organizations adopt it, the more quality of work life there is for developers, and the less time spent on, just think of, unit tests. If you're a developer, just try it. If you don't have a company tool, don't put anything proprietary in it; find some open source code and try it. That's really important: do not leak, do not leak. But just try putting open source code into an appropriate chat bot and say, write me a unit test on it, and you will not go back to the old way, because it is so much better. If your team hasn't adopted it fully, or even if they have and they want to know how adopted it is, then of course a tool like ours, we'd be delighted to help. You should feel free to contact us, and folks can try it out for a bit for free and see what they think. We're developer-first, we're engineering-first, so we're excited for developers to try it out, give feedback, and tell us what they think.

[00:23:52] Joe Colantonio Awesome. I don't know if I gave you a chance to describe the product in more detail. I just jumped right into Gen AI, and you probably have multiple products and services. Is this just a dashboard for CTOs? You mentioned unit testing, so does this help developers write unit tests? Tell me a little more about the solution holistically.

[00:24:10] Matt Van Itallie For sure. The unit test example was more about the software development lifecycle: there are 15 different subtasks. It could include understanding a new codebase. It could include prototyping. It could include bug fixing. It could include refactoring. Gen AI can help in different ways at different stages in that process. A Gen AI tool can help you write unit tests. A Gen AI tool can help you understand legacy code: just explain to me what it does. Those are things Gen AI can help with. It hallucinates, so you've got to double-check; humans must be in the loop. Sema does not do any of those things; that is what the LLM is for. Sema's Gen AI Code Monitor helps you understand the level of usage of generative AI in a codebase, where it is and who's using it, allows organizations to set goals to work towards and see whether they're within the tolerance level, and then keeps track of regulations related to generative AI. We do think at some point more regulations are coming, but there are some now for folks to think about, and we keep track of it all in one spot so there are no big surprises.

[00:25:22] Joe Colantonio Great. I used to work for a health care company, and the FDA could come in and audit you. Would having a tool like this help with an audit? To say, hey, look, we have this under control, and here's the proof: we have this dashboard that shows you.

[00:25:35] Matt Van Itallie It does, it does. And we're at the early stages. In the European Union, there are audits coming for AI usage, not for use by developers, but in the product. If you have an AI engine that's deciding who should get credit or not, or doing underwriting, you're now going to have to prove that the usage is appropriate. We do believe, just like with open source, that there will come a time when you have to prove the provenance of human code versus Gen AI code. When that moment comes, a tool such as ours will be able to help. And some folks want to know in advance, because they think it's coming sooner rather than later. So this is getting the test results before you actually have to take the test.

[00:26:19] Joe Colantonio Love it. And that's a big point: regulations. You keep on top of regulations, and I assume you update the software frequently. From the CTO's perspective, I don't necessarily need to be aware of it, but it automatically takes in the latest regulations and maybe highlights areas where you may not be in compliance.

[00:26:34] Matt Van Itallie Exactly right. Probably the biggest one is for folks who work at organizations that seek copyright protection. There are a couple of different ways you can protect your code. In general, we use trade secret, which just means if it's not open source, you keep it private; you don't let anyone look at it. Copyright is actually protecting the words that are written themselves. Not every organization does it, it's usually the larger ones, but there is already a disclosure requirement. So if you are trying to copyright work that was created in part with Gen AI, you have to acknowledge that Gen AI was used. And our advice, all right, third time's a charm, again not legal advice, ask your lawyers, is that before you say yes or no, you should actually know what the answer is. So in that situation, there's already a standard for disclosure. And you're exactly right: CTOs are busy enough, goodness sakes, and counsel are busy enough. Our job is to keep that up to date and keep those standards front and center for them.

[00:27:37] Joe Colantonio Awesome. Definitely a much-needed tool now, in the age of Gen AI; if people don't have it, I think this is brilliant. So Matt, before we go, is there one piece of actionable advice you could give to someone to help them with their CTO Gen AI development efforts, and what's the best way to contact you and learn more about Sema?

[00:27:54] Matt Van Itallie Sure, so you can find me at mvi@Semasoftware.com, Matt Van Itallie. You can also find me and my colleagues at AI at Semasoftware.com. They both get to me eventually. The one piece of advice, this is a little bit general, and I've had the good pleasure of thinking about code deeply at Sema for the last 7 years. Two engineering leaders, I would say, and you got to believe me. I love thinking about code so much. So I say this from a place of love. When you are talking to non coders, you need to express the benefits in terms of if you're in a business, business outcomes or organizational outcomes or technology outcomes that directly impact organizational outcomes. I love clean code. I love figuring out I'm sticky, legacy code bases, and what a journey it is for developers to figure that out. That is a grand journey, but it doesn't matter to the business and isn't a worthwhile investment to the business unless there is a business outcome. We are going to get more customer retention because performance is higher? Are we going to get to greater developer productivity, which then leads to faster roadmap because they're spending less time getting up to speed in the code base. Getting up to speed in the code base faster is a technology benefit that then leads to a business benefit of faster throughput. Faster performance because the code's been refactored is a direct business outcome. So before you talk to you go to your C-suite or you go to your board of directors and you want to ask for something, can you frame it in terms of a business outcome? Second choice, can you frame it in terms of a tech outcome that will directly benefit a business outcome? And if you can't, I would say as a CTO you shouldn't bring it up because they're looking to you not just for technical leadership, but to bridge that gap between the tech world and the non-tech world doesn't mean you might not want to ask for help on it. 
Or you might think, we're approaching this as a need, but ask: what does this investment in the code actually do for the organization? And if you don't have a great answer for that, work on collecting data to support that claim until you do.

[00:30:13] Remember, latency is the silent killer of your app. Don't rely on frustrated user feedback. You can know exactly what's happening and how to fix it with BugSnag from SmartBear. See it for yourself. Go to BugSnag.com and try it for free. No credit card required. Check it out, and let me know what you think.

[00:30:34] For links to everything of value we covered in this DevOps Toolchain Show, head on over to testguild.com/p140. And while you're there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions to give you the visibility you need to deliver great software. That's smartbear.com. That's it for this episode of the DevOps Toolchain Show. I'm Joe, and my mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.

[00:31:07] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills of or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
