Generative AI for DevOps with Shani Shoham

By Test Guild

About this DevOps Toolchain Episode:

On this episode of DevOps Toolchain, Joe speaks with Shani Shoham, an executive with vast experience in dev tools. They discuss the latest trends in DevOps, including the use of ChatGPT, generative AI, and the importance of data control in AI. They also focus on Kubiya.ai, a company that aims to simplify complex DevOps processes using simple conversations. They cover how Kubiya can reduce the toil experienced by DevOps teams and how it can streamline processes involving multiple systems. Additionally, our host highlights several features currently being developed to enhance the Kubiya.ai platform.

TestGuild DevOps Toolchain Exclusive Sponsor

SmartBear believes it’s time to give developers and testers the bigger picture. Every team could use a shorter path to great software, so whatever stage of the testing process you’re at, they have a tool to give you the visibility and quality you need. Make sure each release is better than the last – go to smartbear.com to get a free trial instantly with any of their awesome test tools.

About Shani Shoham


Shani has been the CEO/COO/CRO of 5 Devtool companies including Testim.io, which got acquired by Tricentis and 21 Labs, which got acquired by Perforce. He is now the Chief Revenue Officer for Kubiya.ai, a ChatGPT for DevOps.

Connect with Shani Shoham

Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability for some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Hey, it's Joe, and welcome to another episode of the DevOps Toolchain Podcast. Today we'll be talking with Shani all about generative AI for DevOps. If you don't know, Shani has been on the show multiple times on the Automation podcast. I think this is the second time on what used to be the performance podcast, now the DevOps Toolchain. He's been a CEO, a COO, and a CRO for at least five different dev tool companies, including Testim.io, which got acquired by Tricentis, and 21 Labs, which got acquired by Perforce. He's now the Chief Revenue Officer of Kubiya, which is billed as a ChatGPT for DevOps, and I'm really excited to have him back. It's a really cool mash-up of DevOps with AI, with a lot of cool things you're going to need to know about, so you'll want to stay all the way to the end. I think you'll really get a lot of value from it, so check it out.

[00:01:12] This episode is brought to you by SmartBear. As businesses demand more and more from software, the jobs of development teams get hotter and hotter. They're expected to deliver fast and flawlessly, but too often they're missing the vital context to connect each step of the process. That's how SmartBear helps. Wherever you are in the testing lifecycle, they have a tool to give you a new level of visibility and automation, so you can unleash your development team's full awesomeness. They offer free trials for all their tools, no credit cards required, and even back it up with their responsive, award-winning support team. Shorten your path to great software. Learn more at SmartBear.com today.

[00:01:56] Joe Colantonio Hey Shani, welcome back to the Guild.

[00:02:00] Shani Shoham Hey Joe, how are you doing? How are the chickens?

[00:02:02] Joe Colantonio Chickens are awesome. Actually, we lost a few of them, so we just got some baby chicks now. I try to make the flock bigger again. It's kind of tough when you have a hobby farm. Animals die, unfortunately, but yeah, it's good other than that, for sure. As I was saying in the pre-show, I always follow you, Shani. I almost see, it seems like you have a pulse on what's going on in the industry. And it seems like whenever something is about to take off, you seem to be right there in the midst of it, ahead of everyone else. And I've seen you on social media recently talking about this new technology called Kubiya. And I just want to get you on to talk a little bit more about what it is and a little bit more about generative AI and how it helps DevOps.

[00:02:44] Shani Shoham Yeah, happy to do that. Thanks for the compliment. I guess it's pure luck. So yeah, let's talk a little bit about Kubiya. First of all, you passed the first test, which is saying the company's name. Kubiya essentially comes from Kubernetes, and initially we were thinking of doing something specifically for Kubernetes. The other thing is Kubiya is a ..... In summary, what Kubiya does is something like ChatGPT for DevOps. ChatGPT is actually more of an attention grabber; the experience is like ChatGPT, but we don't really use a lot of ChatGPT. The idea is to make complex DevOps processes accessible to the rest of the engineering team using simple conversations. Anyone can just go over to the chat tool and ask something that's infrastructure-related, and Kubiya will understand the context and is able to have a bi-directional conversation that's contextualized to all your engineering platforms: knowing your Kubernetes namespaces, your deployments, your Terraform modules, your S3 buckets, etc., with proper guardrails that are set by the DevOps team. And I'm sure we'll talk a little bit more about proper guardrails.

[00:03:57] In a nutshell, the problem that we're trying to solve is the toil that DevOps teams are experiencing today. When I talk to DevOps teams, the ratio is somewhere between 1 to 20 and, in a good case, 1 to 60: a single DevOps engineer supporting 20 or 60 developers, with repetitive requests, thousands of them, coming in the middle of the day, many times at all hours, many of them requiring context switching, and a lot of them requiring back and forth. So if there were a self-service experience that doesn't require developers to know Ansible and Terraform and Helm Charts and Kubernetes and a bunch of other things, and that has proper guardrails so that no one can just write something and suddenly your entire production environment is down, then that would significantly reduce the toil on DevOps, and it would free up time to spend on innovation: building better monitoring, better systems, observability, and other things.

[00:05:04] Joe Colantonio That was actually one of the questions I wanted to ask: why was this created? What problem does it solve? Is it mainly for toil, or is there anything else it was created to help alleviate? And is it just for developers? Who was it created for?

[00:05:16] Shani Shoham Yeah, great question. So first of all, it's not just for developers. We even have customers where it's actually used more by the Ops teams, making infrastructure or complex processes accessible to solution engineers, to sales teams, even around spinning up demo environments, to program managers and others. There are two sides to the equation. The first part is the engineering team. And again, it's not just developers; it's also QA that wants to trigger a new job or pull logs from a failed pipeline, or a data scientist that needs some kind of environment, for instance with data seeding. And then on the other hand there's DevOps. Today, when a developer, a data scientist, or a QA engineer wants something infrastructure-related, there are two solutions. For most companies, it's: open a Jira ticket, and then a few hours later someone from the DevOps team comes back to you with a bunch of questions, and there's back and forth, and maybe it requires the approval of a manager or someone in finance, and before you know it, it's been three days. That's the solution most companies have. The other solution is you expose some kind of API to the developers or the engineering team, which requires a lot of maintenance, and requires the developers to know those APIs and remember them, and those APIs are typically very rigid. So either way, it's not a great experience. What we're trying to do is, first of all, leverage the interface that you spend most of your day in, whether that's Slack, Microsoft Teams, and the like, and just have a conversation. The way you do that today without Kubiya is you get in touch with someone on the DevOps team and you just talk to them and ask them for something. Same with Kubiya. So there's literally zero learning curve. It's as natural as it can be.

[00:07:18] Joe Colantonio Great. And it's all in one place. It's in the DevOps engineer's or the developer's workflow almost, so they don't have to get up, find the right person, and track them down. It's all from where they work, so it helps speed things up as well, I would think. Right?

[00:07:31] Shani Shoham Yeah. I mean, I'll give you an example. Let's say I'm a developer and I want an expensive EC2 instance, so I open a Jira ticket, and again, there's probably going to be some back and forth. At some point, someone in management has to approve that, and given that Jira is not in front of them all day long, there are going to be delays just related to the handoff process. Whereas if you're on Slack all day long and the manager gets a message on Slack, it shows a notification next to the app, and in a meeting or even during a call they can just go and approve it, or have a conversation with the developer who asked for that instance to get more context. It's inside your communication tool, so we save a lot of that handoff. In a nutshell, we take a process that might take a few days and narrow it down to literally 2 or 3 minutes at most.

[00:08:33] Joe Colantonio Nice. Once again, I'm just thinking of what people are already familiar with. They are probably already familiar with things like Chatbots or RPA, and this sounds a little bit like that. So are you just taking advantage of AI and slapping AI on top of this, or how is this different than a Chatbot or RPA?

[00:08:48] Shani Shoham Yeah, it's a long answer. The short version is, first of all, chatbots and Slackbots are very rigid in terms of the experience. You click on a slash command, you get a list of options, and that's about it. It's not like you and me having a conversation, me saying I want to get the logs, and the system understanding that you want to get logs from a Kubernetes namespace. So that's one. The second thing, again going back to that rigid experience: two developers are different. One might be a senior developer who knows exactly what he wants: I want to pull logs out of this namespace and that deployment. Another might not know exactly what he wants; he knows he wants to get logs, and he needs the workflow to add a little bit of back and forth and some questions in order to get done what he wants. In a Slackbot, the experience for both developers will be the same, that same rigid experience. A further difference is on the knowledge management side. I've talked a lot about triggering DevOps actions, but the other challenge teams are facing is that many times, I would say 20 to 25% of the time when I talk to DevOps, the time is consumed by how do I do this, how do I do that. There's a document somewhere, but nobody bothers looking for that document, because that document is not easily searchable. One of the nice things Kubiya can do is you can just forward it a bunch of docs and it will learn from those docs, and you can ask a question and Kubiya will construct an answer on the fly with pointers to those documents.
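The doc lookup Shani describes is essentially semantic search: index the docs as vectors, embed the incoming question, and return the closest match. As a rough illustration only (not Kubiya's actual implementation, which uses LLM embeddings and a per-customer vector database), here is a self-contained toy version that substitutes a bag-of-words vector for a real embedding:

```python
# Toy sketch of "ask a question, get a pointer to the right doc".
# Real systems use LLM embeddings and a vector database; here a
# term-frequency vector and cosine similarity stand in for both.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical doc titles and contents, invented for illustration.
docs = {
    "rotating-secrets.md": "how to rotate kubernetes secrets in a namespace",
    "spin-up-demo.md": "spin up a demo environment on ec2 for sales",
}
index = {name: embed(body) for name, body in docs.items()}

def answer(question: str) -> str:
    """Return the doc whose vector is closest to the question's."""
    q = embed(question)
    return max(index, key=lambda name: cosine(q, index[name]))

print(answer("how do I rotate a secret?"))  # → rotating-secrets.md
```

A production version would also return the similarity score, so low-confidence matches can fall back to "that's not what I meant" style clarification, as discussed later in the episode.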

[00:10:25] Shani Shoham And so again, this is a very flexible and open-ended experience. Two final things that I think make it different, and that are also important when you use AI in an organization, are having the proper guardrails and the proper controls. Two people on the engineering team might be part of different teams, and different teams need to do different things. One might be a data scientist who needs access to GPU instances; the other might be a developer who needs access to simple EC2 compute instances, nano, micro, etc. With a Slackbot, you can probably build that experience, but it's going to be very complex, and again a very boxed experience, if you will. On top of that, you need the ability to request temporary access, you need the audit trail, and the DevOps team also wants to be able to define the experience for their developers. A good example I like to give: you can just list all the instances, but maybe for certain teams you want to filter out certain instances, or maybe if they choose certain instances you want to ask for the manager's approval, or maybe you don't want to list the instances at all, because the instance types mean nothing to the data science team. You want to ask them what they need the instance for, and then customize the experience based on that. And we haven't even started talking about GitOps, Terraform, and Ansible. Those are just some examples of how those experiences are very different from Kubiya's experience.

[00:12:03] Joe Colantonio I'm still trying to visualize it. It almost sounds like Siri for DevOps. You just say, Hey, Kubiya, do a deployment for me? So it's almost like a conversational deployment-type deal?

[00:12:13] Shani Shoham Honestly, we should hire you to the marketing team. Kubiya started a little over a year ago, and ChatGPT wasn't around then, so it was very hard to visualize the experience. So we would call ourselves Siri for DevOps, because that was the thing. Funny enough, we .... at KubeCon last year, which was around October, and it was right around the time that OpenAI started making a lot of noise. So yeah, we kind of got lucky that we literally .... around that time. You're right. Essentially, you just go over to Slack, and the same way that two people would have a conversation, you have a conversation with Kubiya. By the way, you can also have a conversation with someone else, tag Kubiya in the middle of the conversation, ask it a question, and it will deliver the answer as well. But yeah, the idea is you literally just have a conversation. Under the hood, we talked a little bit about guardrails; the workflows and triggering actions are something the DevOps team can control. And that's also pretty cool: we have generative AI to create those workflows. A DevOps engineer can just go and say, I want to create a workflow that pulls logs out of a Kubernetes namespace and deployment, and Kubiya will be able to do that. Again, a lot of usage of LLMs and embeddings and all kinds of algorithms to deliver that experience.

[00:13:40] Joe Colantonio You mentioned a few times this was developed before. I mean, ChatGPT has been around for a little bit, but it seems like this was before ChatGPT really took off. So is this using ChatGPT and OpenAI? And if not, is there a difference? What are you using, and what have you seen in the real world from using it compared to ChatGPT, or whatever you're using?

[00:14:00] Shani Shoham Yeah. So first of all, ChatGPT is a very broad term. You have GPT, which is essentially the large language model powered by OpenAI, and then ChatGPT is just the chat experience on top of it. At the end of the day, engineering platforms hold very sensitive information; it's the core of your technology. You definitely don't want to just expose that to some public LLM, and that's also the reason I think it's going to take a while for things like ChatGPT to get into an organization. We use our own LLMs. We use 4 different models, and we have a separate LLM for each organization that lives in a separate namespace. The reason is that a term like virtual machine might mean different things in different organizations: for one organization it's an EC2 instance, for another it's a VPC. We definitely want to keep it separate. The vector database is also separate for each and every organization. And we do use some form of GPT-3.5, but it's in a contained environment that we control; it's not the public ChatGPT. The only place where you can use ChatGPT is that we have a plug-in to ChatGPT, and if you choose to, you can create a workflow, for example, that takes Jenkins logs, which everybody knows are a nightmare to wade through, and sends them to ChatGPT, and it will give you a summary of the root cause. But it's under your control whether you use that; we don't use any public ChatGPT ourselves. I think it would be a big security concern for a lot of our customers if we did.

[00:15:55] Joe Colantonio I'm just reminded of another conversation I had. A lot of companies, when they buy software, want on-premise software because they're worried about security, but some say that can't happen with AI because AI is so expensive to run. It sounds like you have a way around that with your solution.

[00:16:11] Shani Shoham Kubiya's architecture is actually a hybrid architecture. For most of the internal engineering platforms, we trigger the actions through a lightweight operator that sits inside the customer's network, typically on some kind of EC2 instance or Kubernetes cluster, and they have full control over that operator. So if tomorrow they have a security breach and they need to investigate something, they can literally just turn off that operator. They also have control over which clusters that operator talks to. You can install different operators on different clusters and separate them: one operator for production clusters and one operator for development clusters. As I mentioned, the other part is essentially a vector database that holds the embeddings, and the LLM. Each organization has its own separate namespace, and if organizations choose, they can even host the vector database on their own premises. So again, there's full control over the data, over the information, etc.

[00:17:21] Joe Colantonio Nice. So what special sauce do you all have? How does Kubiya learn? I know you mentioned documentation. You can feed it documentation and it learns from that, but does it learn over time based on conversations between the team?

[00:17:31] Shani Shoham Yeah.

[00:17:32] Joe Colantonio Where it may answer something one way and they're like, well, that's not really right, here's the real answer. And then it knows that going forward. Does that make sense? How does that work?

[00:17:41] Shani Shoham So yeah, spot on. I think I kind of alluded to that: each organization is different. Kubiya leverages reinforcement learning with human feedback, so we actually learn from the experience of our users. For example, if a user asks for something and we serve a list of workflows in a certain order and he chooses the first one, then after a few users request the same thing, or the same user requests it multiple times, we're actually going to change the order in which we deliver the workflows. Or if we deliver something and the user says no, and we make a few other suggestions and he chooses one of those, then over time we'll actually bump up that workflow or knowledge entry to be delivered first. It actually does learn from the interaction with users, which is very, very powerful. The other piece of technology is, as I mentioned, the DevOps team creates those workflows and makes them accessible to their end users, so that they have some control and some guardrails over what's being delivered. When you publish a workflow, we take the workflow and create a description for it using an LLM, essentially mapping the schema into a description. From that description we create embeddings, and those embeddings live inside the vector database. When someone asks a question, we compare it to the embeddings and figure out whether it matches a workflow, a knowledge entry, etc.
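The selection-feedback loop described above can be sketched very simply: candidate workflows start in a default order, and each time a user picks one, its score is bumped so it ranks higher for later requests. This is a hypothetical illustration with invented workflow names, not Kubiya's code, and real RLHF involves training a model on the feedback rather than counting clicks:

```python
# Minimal sketch of learning a suggestion order from user choices.
from collections import defaultdict

class WorkflowRanker:
    def __init__(self, candidates):
        self.candidates = list(candidates)
        self.picks = defaultdict(int)  # human feedback: selection counts

    def suggest(self):
        # Most-picked first; ties keep the default order (sort is stable).
        return sorted(self.candidates, key=lambda w: -self.picks[w])

    def record_choice(self, workflow):
        """Called when a user accepts one of the suggestions."""
        self.picks[workflow] += 1

ranker = WorkflowRanker(["get-pod-logs", "get-jenkins-logs", "get-audit-logs"])
print(ranker.suggest()[0])   # → get-pod-logs (default order)
for _ in range(3):           # several users choose the Jenkins workflow
    ranker.record_choice("get-jenkins-logs")
print(ranker.suggest()[0])   # → get-jenkins-logs (learned preference)
```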

[00:19:28] Joe Colantonio So the guardrails, I guess, also have policies where it's not going to give you sensitive information you shouldn't have. How do you make sure it doesn't hand over sensitive information if I'm just a junior programmer trying to get at company secrets or things like that?

[00:19:44] Shani Shoham Yeah, great question. I think that's the other important piece. By the way, we had an in-person forum with around 100 engineers last night here in the Bay Area, and the conversation was all about AI; we had a roundtable on it. My main takeaway is that the data, and some level of control over what the AI does, is what everybody wants. The other thing is that the responses need to be contextualized to the organization; we can talk about that later on. So there are a couple of ways by which we box the experience such that developers cannot just go and take down the entire production environment. First of all, we have access control, so you can define which person or group has access to do what. It's not just the resource level, it's also the action level, because like I said, data engineering teams require different instances than your typical developers. That's one level of control. By the way, when you have access control, sometimes you need to give people temporary access, because there's a production issue and someone is on call, so you also need to be able to create some kind of approval flow, with managing TTLs, etc. The other level of guardrail is how you deliver the experience: what a developer can and cannot do. A simple example would be, I list all the namespaces and I want to filter some of them out. Like I said, you can install different operators, and maybe you want to give developers access only to the operator that reaches the development cluster and not the production ones. You can filter out some of the entries. Or take again the example of EC2: maybe the EC2 instance types don't mean anything to most of the developers on the team, and maybe instead you just want to ask what they plan to use the instance or worker for, and based on that provision the right instance, instead of giving them the ability to create any instance they want. Same thing for the security group: maybe you want to ask, do you need a worker for an internal application or an external application, and based on that provision the right security group. So again, there needs to be some kind of contained experience that assures you're not just going to have the AI create some kind of code and deploy it without knowing what it actually does.
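The guardrails described here amount to a policy check in front of every provisioning request. As a hypothetical illustration (the group names, instance types, and policy shape are all invented for this sketch), such a check might look like:

```python
# Sketch of per-group guardrails: an allow-list of EC2 instance types,
# with some types additionally requiring manager approval.
POLICY = {
    "developers":   {"allowed": {"t3.nano", "t3.micro"},
                     "needs_approval": set()},
    "data-science": {"allowed": {"t3.micro", "p3.2xlarge"},
                     "needs_approval": {"p3.2xlarge"}},  # GPU boxes need sign-off
}

def check_request(group: str, instance_type: str) -> str:
    """Return 'allow', 'needs_approval', or 'deny' for a provisioning request."""
    rules = POLICY.get(group)
    if rules is None or instance_type not in rules["allowed"]:
        return "deny"  # unknown group or type outside the allow-list
    if instance_type in rules["needs_approval"]:
        return "needs_approval"  # kick off the approval flow in chat
    return "allow"

print(check_request("developers", "t3.nano"))       # allow
print(check_request("developers", "p3.2xlarge"))    # deny
print(check_request("data-science", "p3.2xlarge"))  # needs_approval
```

In a real deployment the "needs_approval" branch would ping the manager in the chat tool and record the decision, which ties directly into the audit trail discussed below.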

[00:22:32] Shani Shoham Also, one final thought on that: GitOps. Lots of organizations today use Terraform, Ansible, or Helm Charts. So maybe you want to provision those instances using .... Maybe you want to use Terraform and actually allow them to change just a few parameters before the Terraform gets deployed. Again, those are just some of the ways you can control the experience.

[00:23:03] Joe Colantonio Awesome. A side benefit would be that you also have an audit trail: Joe is the one who did this deployment, or asked for this deployment, did these three steps before, all timestamped, and you can trace it back directly.

[00:23:12] Shani Shoham Yes, totally.

[00:23:13] Joe Colantonio When and how did something break?

[00:23:15] Shani Shoham Super important, the audit trail. First of all, any action has an audit trail. On top of that, I mentioned, for example, an approval flow: I'm asking to provision an instance and Joe needs to approve that. You want an audit trail with IDs, and maybe even a few details about why I asked for that instance and why Joe approved it. So again, all of that has a proper audit trail, yes.

[00:23:44] Joe Colantonio This might sound like a dumb question: how good does your question need to be for Kubiya to give you the right answer? I mean, with ChatGPT, I know I can phrase things a certain way, and there are certain prompts you can give it to get better results. Do you have any of those issues with Kubiya?

[00:24:02] Shani Shoham Yes and no. I'll start with the no. What we do is contained to a specific domain, which is DevOps, and it's easier for the LLM to understand what you're asking because the domain is contained. There's an automatic association with the DevOps domain: Kubernetes, jobs. If you ask to trigger a job, I know you want to trigger a CI job and not some publishing job on LinkedIn. So on the one hand, it's easier to build a conversational agent that is contained to a specific domain. Naturally, the more information you give it, the more accurate it's going to be. But we also built the experience such that if you ask something and it gives you a list of options and it's not one of those options, you can actually say, that's not what I meant, and it's going to iterate and try to give you other suggestions. And at the end of the day, since we're talking about, I don't know, five hundred workflows, a thousand workflows, and some knowledge entries, it should converge. I haven't seen a case where, at the end of the day, I wasn't able to do what needed to get done.

[00:25:14] Joe Colantonio Generative AI is kind of becoming a buzzword now, but you actually have built-in workflows. Any feedback from your customers on how they see it working, or percentages of how using generative AI has helped the DevOps process, reduced toil by 10% or anything like that?

[00:25:31] Shani Shoham Yeah, I can tell you that companies that have deployed Kubiya literally double the capacity of the DevOps team. Think about what percentage of your day is these day-to-day requests, and again, 20 to 30% of the time those requests are really just about giving someone access to a doc. We've seen organizations go from no automation to 80% of these requests being fulfilled by Kubiya in somewhere between 4 to 6 weeks. A lot of the workflows come out of the box. If you want to do things on Kubernetes, for example, scaling up, scaling down, getting pod information, killing pods, restarting rollouts, changing images, those sorts of things, all of that comes out of the box, so it's very easy to get started even without creating a workflow. By the way, a quick story. A few weeks ago we had an issue in production and we had to roll back the deployment. The CTO was driving south to a vacation; he was literally in his car. He stopped on the side of the road, opened Slack on his phone, and pretty much rolled back production. So, yeah, very cool. That's a little bit in terms of the impact. I'll also call out one other thing that comes to mind. We have a media company where, if they want to send a certain file to their customers, the file has to go through a couple of processes. Those processes today are manual and take a while, so you might be looking at two or three days until they get something done. The nice thing is, with Kubiya you're essentially able to provide the information at the beginning of the workflow, and then Kubiya handles the handover from one system to the other. One of the powerful things is you can create workflows that combine multiple systems. In the case of a developer, it might be: you merge a branch and you close that Jira ticket or move it to another stage, or you just got approval for something, so you move it from pending approval to something else. Going back to that use case, we took a process that took a few days and involved a lot of handover, someone from IT spinning up an EC2 worker, and someone on the product management team waiting for that worker and then doing whatever needs to get done. And by the way, many times that worker stays up for a while afterward. We're able to cut that process to literally a few hours of an automated process, plus tearing down and terminating the worker at the end of the job. So we save both time and cloud resources.

[00:28:31] Joe Colantonio What's on the roadmap? It seems like you're just starting off with generative AI. Is there anything you see in the future, anything being built into the roadmap to make it even better, or any other plans? Do you see AI playing a role in DevOps that people may not even be aware of yet?

[00:28:50] Shani Shoham Yeah. There are a couple of things coming down the pipe. First of all, support for a wider range of chat tools. Two, on documentation, there are a lot of use cases. I just talked to a very large company, one everybody knows, I'm not going to mention the name, and they said our docs are crap, they're outdated, etc. When you ask Kubiya a question, already today you can give feedback on the answer, so if you think the document is out of date, you can give a thumbs down, and we can pass that feedback back to the team. But the next step is to take the conversation that's happening, where you say, wow, this would have been a great doc on Confluence or Notion, and tag Kubiya, and Kubiya will be able to summarize that conversation and put it into a doc, so the next time you don't have to manually answer that question. There's more stuff. We're constantly expanding the integrations; we're actually using some code-generating AI to do that as well, mapping the APIs and the SDKs into something Kubiya can use in literally a few minutes. Also cool and coming down the pipe is the ability to discover your network, your architecture, and your resources. That's something we do today, but we also plan to enhance that ability. Already today you can ask Kubiya the cost of your cloud resources, for example, FinOps; the next level is the ability to provide graphs directly through your chat tool and deliver a much richer experience. Those are just some of the things coming down the pipe.

[00:30:38] Joe Colantonio Awesome. Okay, Shani, before we go, is there one piece of actionable advice you can give to someone to help them with their AI DevOps efforts? And what's the best way to find or contact you or learn more about Kubiya?

[00:30:48] Shani Shoham I'll start with the contact part. You can find us at Kubiya.ai, K-U-B-I-Y-A.ai, or reach out to me at shani@kubiya.ai. As I alluded to earlier, with the topic of AI, everybody feels that there's some kind of disruption. To me, it's the same experience as, I don't know, 10 or 15 years ago when cloud started. I read an article a few days ago comparing it to the disruption of the iPhone. Everybody's trying to figure out what to do around AI. I get on calls five, six times a day where people say, hey, we're investigating AI, and they don't necessarily have a specific use case, but they feel that they need to do something. I agree that it's disruptive, and I think it's going to create a lot of new use cases. At the same time, I think there's still a lot to figure out, especially in terms of security and private information, and also in terms of context. A lot of the AI we're seeing today is general-purpose AI, and I think AI will really become much more powerful when it has the content and the organizational context. A good example: I spoke to a VP of Engineering yesterday at the event we had. They tried to use Copilot, and they said it doesn't really save us a lot of time, we didn't find it valuable; but if it could learn from our repos and provide recommendations specific to my organization, that would be powerful. Part of what we're doing at Kubiya is getting the context of the organization: the namespaces, the architecture, the services, the buckets, etc. Again, I think that's the point at which AI is going to be much more powerful, and that's what I hear from organizations when it comes to AI. We're at the core of it, so we're definitely having a lot of these conversations day in and day out.

[00:33:01] And for links to everything of value we covered in this DevOps Toolchain show, head on over to TestGuild.com/p115, and while you're there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions to give you the visibility you need to deliver great software. That's SmartBear.com. That's it for this episode of the DevOps Toolchain show. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.

[00:33:35] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
