About This Episode:
AI is accelerating software delivery, but it’s also introducing new security risks that most developers and automation engineers never see coming.
In this episode, we explore how AI-generated code can embed vulnerabilities by default, how “vibe coding” is reshaping developer workflows, and what teams must do to secure their pipelines before bad code reaches production.
You’ll learn how to prompt more securely, how guardrails can stop vulnerabilities at generation time, how to prioritize real risks instead of false positives, and how AI can be used to protect your applications just as effectively as attackers use it to exploit them.
Whether you’re using Cursor, Copilot, Playwright MCP, or any AI tool in your automation workflow, this conversation gives you a clear roadmap for staying ahead of AI-driven vulnerabilities — without slowing down delivery.
Featuring Sarit Tager, VP of Product for Application Security at Palo Alto Networks, who reveals real-world insights on securing AI-generated code, understanding modern attack surfaces, and creating a future-proof DevSecOps strategy.
Exclusive Sponsor
**Join the TestGuild — where 40,000+ automation engineers level up their skills.**
If you’re serious about test automation, this is where you’ll find the tools, techniques, and insights that help you ship better software with less stress. The Guild brings together practitioners who openly share what’s working, what’s not, and which solutions are worth your time.
Join the community: https://testgld.link/joinguild
If you build tools or services for automation engineers, the Guild is also where you can connect with the exact audience that cares about what you offer. We partner with vendors who want meaningful visibility, qualified leads, and a trusted way to get their message in front of decision-makers.
Explore sponsorship and awareness packages: https://testgld.link/guildcall
About Sarit Tager

Sarit Tager, VP of Product Management at Palo Alto Networks, leads the code and application security product management team for Cortex Cloud. With prior leadership roles at Check Point, JFrog Security, and Vdoo, Sarit has a proven track record of driving product strategy, enhancing customer engagement, and delivering cutting-edge security products.
Connect with Sarit Tager
- Company: www.paloaltonetworks.com
- LinkedIn: sarittager
Rate and Review TestGuild
Thanks again for listening to the show. If it has helped you in any way, shape, or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
[00:00:02] In a land of testers, far and wide they journeyed. Seeking answers, seeking skills, seeking a better way. Through the hills they wandered, through treacherous terrain. But then they heard a tale, a podcast they had to obey. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold. Oh, the Test Guild Automation Testing podcast. Guiding testers with automation awesomeness. From ancient realms to modern days, they lead the way. Oh, the Test Guild Automation Testing podcast. Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.
[00:00:35] Hey, today's episode goes deep into something I think is becoming unavoidable for automation engineers, and that is how AI-generated code creates new risk, and with that, new responsibilities. Whether you're using Cursor, Playwright MCP, Copilot, or internal AI agents, the way you build, test, and produce code is changing fast. And that's why I'm really excited to have Sarit Tager joining us from Palo Alto Networks, helping break down what testers and automation engineers need to know: things like how vulnerabilities get introduced, and how you can use guardrails to protect you before the code ever hits your repo. If you're building automation frameworks, using AI to generate code, or working in pipelines that ship fast, this episode is packed with insights that directly affect your day-to-day work. You don't want to miss it. Listen up.
[00:01:22] Joe Colantonio Hey Sarit, welcome to The Guild.
[00:01:23] Sarit Tager Hey Joe, nice to be here.
[00:01:28] Joe Colantonio Awesome to have you. I'm really excited to have you on the show. You have a lot of experience and you work for a really cool company. So before we get into the topic, which I think is gonna cover vibe coding, security, all those things that are really happening right now, maybe a little background. How did you become the VP of Product at Palo Alto Networks?
[00:01:47] Sarit Tager I'll tell you a bit about my product, and a fun fact about where I started from. I'm the VP of Product for application security within Palo Alto, as part of the Cortex Cloud family. We cover everything from code scanning to pipelines to ASPM, application security posture management. Super exciting subjects, and a very interesting area; a lot of AI is in place. The fun fact is that I came to this by being a developer first and then a VP of Engineering. I always say that I came to the application security and cloud security area on a mission to make it better for developers, for practitioners, and for security practitioners. I remember a lot of nights sitting and trying to fix security issues as a developer, and it's part of what made me become a product manager in this area. And yes, I've worked for a variety of companies, from the network side of things to cybersecurity of images, and more, like JFrog. A lot of interesting places.
[00:03:04] Joe Colantonio Love it, love it. One of the topics you said you wanted to cover was vibe coding. For people that don't know what vibe coding is, a lot of people think it's a joke word. Is this a legit thing? How would you explain what vibe coding is to someone who might not even know?
[00:03:19] Sarit Tager I'd explain it like this: as opposed to how we used to go to Stack Overflow and check how to do things, you now have two things. One will be a chat that is actually helping you generate code, and one that actually generates the code. And if you think about it, it kind of changes the way you define a developer. Who is a developer? If I just write a prompt saying I want an application that, I don't know, colors things in blue, is that a developer, or maybe more a product manager writing requirements? I think vibe coding in general really changes the way we think about developers and their day-to-day job. And I think the most important thing is that if I'm using a lot of AI-generated code, the question becomes who needs to fix it from a vulnerability perspective. It creates vulnerabilities because it was trained on code created by people, and hence most of the examples it sees in the wild contain vulnerable code. And then the question comes: if I scan it, and I didn't write it, or at least I only wrote the prompt that wrote it, can I fix it? Can I as a developer fix it? I think this is really changing the way we should think about security for developers, how they need to fix their code, or actually how we help them secure the code.
[00:04:49] Joe Colantonio You just mentioned something that's interesting. I've been using Cursor a lot, even for small apps, and then I go, okay, check this for security issues, and it finds a bunch of security issues after it wrote me the code, which I always think is weird. You said one reason is that it's been trained on a lot of code, and a lot of that code had security issues. So how do you get around that? Is that always gonna be an issue when someone's doing vibe coding?
[00:05:12] Sarit Tager I think the way to do it is to make sure it's generated securely from the start. Trying to fix it after the fact is actually much more complicated. If you secure it, if you put the right guardrails in place, if you put the right rules to be checked up front, then the code will be secure by generation and not just after the fact. What I'd also like to say is: ask it to make me an application that doesn't have any critical vulnerabilities, not just to create an application that does something. Instead of after the fact, shift it to before you actually create the code.
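To make that concrete, here is a minimal sketch of what "secure by generation" rules might look like as a project rules file for a Cursor-style assistant. The file path and exact wording are assumptions for illustration, not a Palo Alto Networks feature; the point is that the rules are checked at generation time, before any code exists.

```text
# .cursor/rules/security.md (hypothetical path; adapt to your AI tool)
- Never hardcode secrets, API keys, or connection strings; read them from
  environment variables or a secrets manager.
- Use parameterized queries for all database access; never build SQL by
  string concatenation.
- Validate and sanitize all external input at trust boundaries.
- Prefer well-maintained dependencies with no known critical CVEs; flag any
  dependency you are unsure about instead of silently adding it.
- For authentication or crypto, use the platform's standard libraries;
  never invent custom crypto.
```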
[00:05:55] Joe Colantonio It actually could be a prompt issue then. If you prompt it correctly from the start, then you're gonna bake in security practices, I guess.
[00:06:05] Sarit Tager Yes, but if you think about it, there are a lot of different things. For example, it may become kind of a loop of code generation: I try to generate something which is not vulnerable, but what I'm asking for actually has vulnerabilities. So you have to be very careful in the way you define the prompt. There's also the difference between perfect and secure enough. And I think this is where the security companies come into place and say, we know what a secure application is, and we will help you bring the rules into your AI-generated code. If we just say, write something without vulnerabilities, I don't think I know any open source that doesn't have any vulnerabilities. Maybe something that was never used, or never really in production.
[00:06:54] Joe Colantonio Right, right, absolutely. Could this give people a false sense of security, then? If they said, oh, I told it from the beginning to write code without vulnerabilities, so I don't have to worry about it at this point.
[00:07:06] Sarit Tager Yes, it may. And I wouldn't say it completely removes all the tests that are done later in the process, for example on a pull request, in periodic scanning, or at build time. It won't be only the IDE checks, only when the developer writes the code. Another thing that can happen, if you think about it: I'll ask it to create an application or code which is not vulnerable, and it will try to do that, but then what is actually vulnerable? I have a lot of questions. Is a CVE really a risk if it's in an open source package I pulled in? It may not be reachable. I think it's a balance between understanding which prompt to write and actually having a security brain behind it, to understand what needs to be fixed. I wouldn't say that just a prompt saying create something which is not vulnerable would be enough.
[00:08:11] Joe Colantonio Before AI really came on the scene, you could have tooling that, as you're actually writing the code, prompts you with suggestions on how to make it more secure. But now that AI is writing it, you kind of lose out on that. Working for a security company, do you work with the AI behind the scenes to make it more secure? Do you have any solutions to help with that type of scenario now?
[00:08:33] Sarit Tager Yes. So the idea is that, first, we have AI within our product for a lot of things, but we also changed the way we think about AI-generated code. AI-generated code, when you get to the periodic stuff, like when you scan the branch, looks the same. The main issue is the developer experience, which is different, because you cannot ask developers to fix things they didn't actually create. What we're thinking of is kind of levels of security. One will be rules using our scanners today, then some enforcement going forward. There is also another attack surface coming from these IDEs with all these MCPs. Think about the fact that an MCP can just ask you to delete all the files on your computer, or that it can actually send all the code to an external source. So it's not just about secure coding; it's about the fact that you now have an autonomous agent that can do a lot of things in your environment.
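As a rough illustration of that agent risk, here is a minimal, hypothetical sketch of a policy gate placed in front of agent tool calls. The `ToolCall` shape, tool names, and rules are assumptions for illustration, not a real MCP SDK or Palo Alto Networks API; the point is that destructive actions and unknown egress are denied before execution.

```python
# Hypothetical policy gate for agent-initiated tool calls.
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str    # e.g. "filesystem.delete", "http.post" (illustrative names)
    target: str  # path or URL the tool will act on

BLOCKED_TOOLS = {"filesystem.delete", "shell.exec"}   # destructive actions
ALLOWED_HOSTS = {"api.internal.example.com"}          # egress allowlist

def is_permitted(call: ToolCall) -> bool:
    """Deny destructive tools outright; allow network calls only to known hosts."""
    if call.tool in BLOCKED_TOOLS:
        return False
    if call.tool.startswith("http.") and not any(
        call.target.startswith(f"https://{host}") for host in ALLOWED_HOSTS
    ):
        return False
    return True

# Gate every agent tool call before executing it.
print(is_permitted(ToolCall("filesystem.delete", "/")))             # False
print(is_permitted(ToolCall("http.post", "https://evil.example")))  # False
```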
[00:09:39] Joe Colantonio Oh, that's interesting. Yeah, so your attack surface has increased. How do you then handle it? Because it seems like AI can stay one step ahead of your security practices, or maybe that's not true. I don't know.
[00:09:53] Sarit Tager I would say that both attackers and cybersecurity companies will use AI to train for future attacks. For the IDE, at least for vibe coding, it's a new attack surface. Yes, people could have made mistakes in the past, but it's different from malicious code coming in where I'm just writing a Jira ticket, the Jira ticket creates the code, and then the Jira ticket can say, just send everything to a malicious website or something like that. One option will be an agent or any .... within your environment. Another option is having capabilities around the IDE. So there is a lot of cool stuff that we're thinking about.
[00:10:43] Joe Colantonio Nice. Do you have any real-world studies, maybe on how attacks have increased with AI, or on how using AI tools to help make your code more secure has helped?
[00:10:57] Sarit Tager You see different things about attacks, different examples, but I don't see this as something we can say is widespread, because I think it's not yet being used that often. If you think about the big companies, they are still struggling to understand exactly how to use Cursor or Cursor-like tools. From a security perspective, I mean.
[00:11:26] Joe Colantonio OWASP has been around for a while, and I'm always shocked how many developers and testers are unaware of OWASP. You would think security would be top of mind. Has AI actually helped make security more top of mind? Or is it still almost an education process, where people need to tell management, hey, we need to watch out before we ship this?
[00:11:48] Sarit Tager I think it's more about making it easier. I'm not sure education will be enough, because in the end, if it's easier to write the code, it should be easier to write it securely. It shouldn't be something you have to be educated about. You can still put guardrails in place for all cases, and I'll get to our ASPM solution later on to explain. Generally speaking, I really think the product needs to be easier and much more automatic, rather than asking developers to write securely or asking them to scan the code. It needs to be much simpler, it needs to produce fewer false positives, and it needs to be adopted through simplicity rather than education.
[00:12:41] Joe Colantonio That's absolutely true. I speak with a lot of security folks, and they always say a lot of teams are worried that more security controls are gonna slow them down, while companies say we need to ship quicker and faster. So how do you balance security with trying to deliver at velocity?
[00:12:59] Sarit Tager This is a great question. It kind of brings me to the application security, the ASPM part that we are doing. There's a perception that says: we will not stop developers from getting applications into production, because we would delay the velocity of business value reaching customers. What ends up happening is that vulnerabilities get into production, and then you need a developer to fix them, and that starts a vicious cycle, I would say. You have to figure out who actually created the code, and that's not easy once you are in production. You need to figure out which production issue maps to which code issue. You want to understand, for example, how many production issues were created from the same code issue. Then usually the owner of that code has already moved on to something else, so you're bringing them back to solve things they did a few months or maybe a few weeks ago. They need to fix it, you need to build the software, and then the software needs to be deployed. So in the end you spend a lot of developer time, just because we didn't want to stop them or put guardrails in place in the first phases of the software life cycle. One of the things we focus on in our application security solution, besides other things I'll get to in a second, is providing smart guardrails. One of the things you usually hear from customers is: I cannot block on every critical CVE; that would mean all my builds break, or all my PRs break. I want to do something much smarter. Let's do it only for things that actually go to production. Let's do it for things that actually have internet access. Let's see if the workload I'm deploying has excessive permissions, anything that tells us whether this can really be exploited. The whole motivation for the application security part was: let's bring in all the information we have, whether it's the business criticality for the user, the code context, or the cloud context, and turn it into super smart policies. On one hand you'll be secure when you reach production, and on the other hand it won't block or delay your velocity and stop you from bringing business value into your environment.
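Here is a minimal sketch of that "smart guardrail" idea: instead of failing every build on any critical CVE, block only when runtime context says the finding is plausibly exploitable. The field names and policy are illustrative assumptions, not a Palo Alto Networks API.

```python
# Sketch of a context-aware blocking policy for scanner findings.
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str                # "critical", "high", ...
    deployed_to_prod: bool       # does this code ship to production?
    internet_exposed: bool       # is the workload reachable from the internet?
    excessive_permissions: bool  # does the workload run over-privileged?

def should_block(f: Finding) -> bool:
    """Block the PR/build only when severity AND runtime context agree."""
    if f.severity != "critical":
        return False
    # Critical, but context decides whether it is really worth stopping the line.
    return f.deployed_to_prod and (f.internet_exposed or f.excessive_permissions)

# A critical CVE in an internal-only tool passes; the same CVE on an
# internet-facing production workload blocks the merge.
print(should_block(Finding("critical", False, False, False)))  # False
print(should_block(Finding("critical", True, True, False)))    # True
```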
[00:15:50] Joe Colantonio That's interesting. You talked about context. Every business is different, and depending on what business you're in, some things may require more security than others. How do you learn the context? Do you train it on the requirements? Do they almost have their own personal, not model, but personal kind of training, so it has the context to say, yes, this could be a security issue, but in this case you don't have to worry about it?
[00:16:17] Sarit Tager We have three levels. One will be the code itself. We can know a lot about the code, because we scan the entire codebase and can understand whether this code was ever shipped to production. The second one will be the cloud. Because we have such an extensive cloud security solution, we have, I would call it, signals within the cloud, whether on the network side of things, the data side, identity, or workloads. We see everything and can actually understand what your production environment looks like. We bring that context back to application security and apply it to the code: we saw in production that this workload is highly permissive, so maybe we should be more careful about the things that go into it. The third level is the business criticality of things. Every organization has different ways it wants to look at its applications; this usually comes from external sources. And the last thing, and this is a super exciting thing that I want to share: as you know, our Cortex Cloud solution is built on our Cortex platform, and the platform also incorporates our XSIAM, XDR, and XSOAR, which is all the SOC capabilities we have from real attacks. If you think about it, another context which is super important is what is actually happening in production. Can I say something about the attacks I see on my production environment? Can I prioritize based on them? Having all these signals in our environment, everything in the same data lake, and then bringing it back to the developers and the DevOps teams and saying, you need to fix this, and here's why, and then also providing a remediation option, potentially an automatic remediation, this makes their life easier. And if it doesn't cost them anything, they will just apply it. This is the reason we really believe that if you meet the developers in their systems, if you bring all the context into their environments, I wouldn't say it's education, it's more about simplifying the process. Make them part of it, part of the success. Having production with, I wouldn't say zero issues, but a very small number of issues, is a victory for everyone, because nobody will get woken up at night for a security issue that was just found in production, which is obviously much more urgent than anything found in code.
[00:19:06] Joe Colantonio Yeah, absolutely. And maybe this isn't right either, but because you're using a product like yours, a real product that knows about other attacks that are happening, maybe there are newer attacks that just started happening at other companies that a security agent needs to stay on top of, like, I'd better check my code for that. Does this also give them insights? Hey, here's a new vulnerability we found out there, and by the way, you have it, so you might want to get on it now. Is that possible?
[00:19:34] Sarit Tager Yeah. So the models we train for attacks are continuously updated. We have this information from, I would say, the amount of data we have, not just from a specific user. And then the idea is really to get back to the developer and say, we saw this attack, or we saw something new, and we want you to fix it. And again, not just fix it, but create the code correctly. I'll give you one example which is very common. Developers like to choose an open source component that does what they need, and these components are not necessarily something you can trust. If you think about an agent that just tries to generate code based on OSS, it will probably use the one it saw being used a lot, but that doesn't mean it's a secure one. If this is flagged within the IDE, then no issue: the code won't be created with it, you'll create the code with a different component, and then you'll continue your regular coding. If it's not flagged within the IDE, then you're already stuck with an API and an OSS component you've chosen, and you have to figure out how to patch it, or maybe upgrade it, or something like that. So early prevention, which is something we really believe in, is important for everyone, just so you don't spend time on things you could have solved if you'd planned correctly.
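For a sense of what vetting a component at selection time can look like, here is a minimal sketch that queries the public OSV.dev vulnerability database before a dependency is adopted. The gating policy is an assumption for illustration; a real pipeline would also weigh license, maintenance, and provenance, and this is not a description of Palo Alto Networks' scanners.

```python
# Check a candidate open source component against OSV.dev before adopting it.
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return known vulnerability records for a package version from OSV.dev."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

# Gate the choice at selection time, not after the code is written.
vulns = known_vulns("requests", "2.19.0")
if vulns:
    print(f"Rejected: {len(vulns)} known advisories, e.g. {vulns[0]['id']}")
else:
    print("No known advisories; component passes this (minimal) gate.")
```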
[00:21:13] Joe Colantonio Nice. We've been talking about developers and security, but I know there are a lot of trained professionals just in security alone. Is there such a thing as, like, vibe security, where someone can now use plain English to do security prompting, rather than having to go through all the certifications and everything that a lot of them are required to have?
[00:21:32] Sarit Tager Think about simple tasks. I'm a security practitioner, maybe not a deeply expert one, and I'm responsible for an application and want to understand what the top items are for me. This is something AI can help with a lot, just by going in with natural language and asking, what are my top issues to fix? The second thing will be, what do I need to do to prevent issues in the first place? It's not just fixing things, but also lowering the funnel, making sure no more issues are coming into the funnel. I'm a great believer in natural language and in using AI for, I would say, a lot of tasks. I also believe it will change the way we think about products and how products are built.
[00:22:27] Joe Colantonio You mentioned production a lot. Does that highlight areas where, this isn't necessarily a security vulnerability, but we notice this module is being used a lot, so you may want to add extra security measures around it?
[00:22:40] Sarit Tager One of the things we do is scan the models, and we have a full inventory of the models we have in production. We also have these different signals within your pipelines and other places in the environment, to figure out if there are models that were deployed even though you didn't have permission to do that. Yes, AI is super helpful, but you also need to figure out whether you understand exactly what is being deployed and exactly what is being used. We try to cover everything from code to runtime to the SOC.
[00:23:20] Joe Colantonio You did mention guardrails earlier. Can you list out some guardrails that could be put into place to help make more secure code with AI?
[00:23:31] Sarit Tager I gave an example of a bad guardrail: just block any critical CVE you're trying to submit, or every critical and high. A good guardrail says: please make sure these CVEs are really reachable in code, that they're on a business-critical application, and that they're being deployed to production. And again, if I were naive, I would say every vulnerability in the code will be fixed. But we know that companies have their own targets and goals. They have to bring business value into the environment, and we're here to help them secure the things that are important. I wish they had time to do everything, but I'm more realistic than that, and hence we have to figure out how to help them fix the things that really matter.
[00:24:28] Joe Colantonio Over time, do you foresee security no longer being an issue, because AI will then be handling all the security for you? I mean, I don't know, that's probably outrageous, but any thoughts on the future of it?
[00:24:41] Sarit Tager I would like to say it will be simpler. I'm not sure it will no longer be an issue, but in my vision it needs to be much more simple, much more automatic in the sense of how you remediate things. And it has to be incorporated within the process in a very smooth way. It cannot be something done after the fact; it has to be part of your day-to-day job. Will it still be a burden to scan and fix code? Yes, code would probably be easier to create without any rules, but we live in a world with rules, so I would assume that will apply to code as well.
[00:25:28] Joe Colantonio You have a lot of products at Palo Alto Networks. I'm just curious, if a company is just starting its security journey and may feel overwhelmed, are there stages you recommend for implementing this? So they're not throwing it all at the wall and all of a sudden having all these false positives they need to work through. How does that work?
[00:25:48] Sarit Tager Yes, definitely, especially in application security, because usually companies have a lot of different problems. There are some things we added which are super cool. We call one of them stop the bleeding: you only fix things that are new. You understand you have other issues, a huge backlog or technical debt you have to fix, but the first thing you do is block all the new stuff coming in. You try to do it more gradually, in steps or in phases, rather than just saying, okay, block everything. That's one example. Another example is putting goals on a group, saying you have to fix fifteen percent of your backlog by this quarter. Another thing is the different scanners. For secrets, for example, we have the option to validate them: we can understand whether a secret is actually live, and that gives another level of prioritization. So the idea is really to figure out how we can help them prioritize the work and understand what remediation they need to do. And the second thing, and this is the most important part, is not to create the issue in the first place. I think my example about choosing the open source component is very important, in the sense of not getting into a place where you have to fix a lot of critical vulnerabilities.
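Here is a minimal sketch of the stop-the-bleeding idea: CI fails only on findings that are not in an agreed baseline, so the existing backlog is tracked separately while newly introduced issues are blocked. The finding format and file names are assumptions for illustration, not a specific scanner's output.

```python
# Fail CI only on *new* scanner findings relative to a baseline snapshot.
import json
import sys

def finding_key(f: dict) -> tuple:
    # Identity of a finding: rule + file (line numbers shift too easily).
    return (f["rule_id"], f["file"])

def new_findings(current_path: str, baseline_path: str) -> list:
    with open(current_path) as cur, open(baseline_path) as base:
        current = json.load(cur)
        baseline = {finding_key(f) for f in json.load(base)}
    return [f for f in current if finding_key(f) not in baseline]

if __name__ == "__main__":
    fresh = new_findings("scan-results.json", "baseline.json")
    for f in fresh:
        print(f"NEW: {f['rule_id']} in {f['file']}")
    # The old backlog doesn't break the build; newly introduced issues do.
    sys.exit(1 if fresh else 0)
```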
[00:27:23] Joe Colantonio Great. Do you hear a lot of things that maybe aren't true about AI-generated code from a security perspective? Like, oh, why does everyone think this? If only they didn't think this, it'd be so much better.
[00:27:35] Sarit Tager I think AI is here. It's not a question for us; it's here. All of us are using chats, and we do a lot of things using LLMs, AI, or agents in general. I think this is the new way things should be done. If code can be generated by AI, then we just need to join in, and not say it's vulnerable or that it won't be the same quality. I do believe it's something that's already here. Yes, there are some concerns about the kind of code, and we do see that some AI models tend to do things like, I would say, deleting code from production, or behave in ways that are just not how we want them to behave. But it's here. We just need to make sure we do it securely. I don't think we need to fight the new stuff coming in, but rather embrace it and make sure we use it to our advantage.
[00:28:47] Joe Colantonio That's a good point. I know a lot of developers and testers say it's just hype, this is gonna be an AI bubble, this is all gonna go away. But I know that in security, a lot of people are using AI, and it's making it easier to do hacks, I think. That type of thing sounds like it would leave people open to more vulnerabilities if they don't start embracing AI and realize it is here, I guess.
[00:29:12] Sarit Tager Yeah, but we can use AI to protect as well.
[00:29:14] Joe Colantonio To protect us. Right, right, right.
[00:29:19] Sarit Tager It's kind of a race between protection and exploitation in the same way. I believe the protection side will be winning; security will win here.
[00:29:32] Joe Colantonio Awesome. All right, before we go, is there one piece of actionable advice you can give to someone to help them with their DevSecOps or security efforts? And what's the best way to contact you or learn more about Palo Alto Networks?
[00:29:45] Sarit Tager Of course, on our Palo Alto site, look for ASPM: a super cool announcement and product we have that goes through our entire ecosystem as a platform, making sure developers have their own way of using the platform, and application security as well. One of the biggest challenges is bridging the gap between application security and developers. And the second thing I would say: application security is a journey, not a one-time thing. You have to set your KPIs and see how you're progressing against them. We're here to help, to explain what's most important and what can be fixed later on.
[00:30:31] Joe Colantonio Absolutely. And we'll have links to all this awesomeness down below.
[00:30:35] Thanks again for your automation awesomeness. For links to everything we covered in this episode, head on over to testguild.com/a455. And if the show has helped you in any way, why not rate and review it in iTunes? Reviews really help the rankings of the show, and I read each and every one of them. So that's it for this episode of the Test Guild Automation Podcast. I'm Joe, and my mission is to help you succeed with creating end-to-end, full-stack automation awesomeness. As always, test everything and keep the good. Cheers.
[00:31:10] Hey, thank you for tuning in. It's incredible to connect with close to 400,000 followers across all our platforms and over 40,000 email subscribers who are at the forefront of automation, testing, and DevOps. If you haven't yet, join our vibrant community at TestGuild.com where you become part of our elite circle driving innovation, software testing, and automation. And if you're a tool provider or have a service looking to empower our guild with solutions that elevate skills and tackle real world challenges, we're excited to collaborate. Visit TestGuild.info to explore how we can create transformative experiences together. Let's push the boundaries of what we can achieve.
[00:31:53] Oh, the Test Guild Automation Testing podcast. With lutes and lyres, the bards began their song. A tune of knowledge, a melody of code. Through the air it spread, like wildfire through the land. Guiding testers, showing them the secrets to behold.