AI-Driven DevOps Data Security and Privacy with Amjad Afanah

By Test Guild
A promotional poster for a TestGuild DevOps Toolchain event titled "AI-Driven DevOps: Enhancing Data Security and Privacy" featuring Amjad Afanah and supported by SmartBear.

About this DevOps Toolchain Episode:

Today, we cover shift-left strategies for sensitive data protection and privacy compliance. We'll also spotlight an AI-driven security solution called HoundDog.ai. The company's founder, Amjad Afanah, joins us and brings a wealth of knowledge from his extensive background in cybersecurity.

In this episode, we'll explore how HoundDog.ai takes a proactive stance in preventing PII leaks and ensuring compliance with regulations like GDPR. Amjad shares insights on the different types of PII leaks, the importance of protecting sensitive data even in the development phase, and how the solution integrates seamlessly with major CI pipelines.

We'll also discuss how this tool can significantly reduce the time and costs associated with remediating data leaks, and how advanced AI techniques give it high accuracy in detecting vulnerabilities. Amjad underscores the importance of DevSecOps education and preventive controls in data security.

Whether you're a security leader, a developer, or handling privacy concerns at your company, this episode is packed with valuable information. Learn how to try out HoundDog.ai's free scanner to safeguard your code.

Try out SmartBear's Bugsnag for free, today. No credit card required. https://links.testguild.com/bugsnag

TestGuild DevOps Toolchain Exclusive Sponsor

SmartBear’s BugSnag: Get real-time data on real-user experiences – really.

Latency is the silent killer of apps. It’s frustrating for the user, and under the radar for you. It’s easily overlooked by standard error monitoring. But now SmartBear's BugSnag, an all-in-one observability solution, has its own performance monitoring feature: Real User Monitoring.

It detects and reports real-user performance data – in real time – so you can rapidly identify lags. Plus gives you the context to fix them.

Try out SmartBear's Bugsnag for free, today. No credit card required.

About Amjad Afanah

Amjad Afanah is a serial entrepreneur with a rich background in cybersecurity. He led his first company, DCHQ, a cloud management startup, to acquisition, and later founded APISec.ai, which developed one of the first API security scanners. Before founding HoundDog.ai, Amjad served as the VP of Product at Cyral, a data security platform that implements security controls on production data. His experience at Cyral, coupled with significant feedback from security and privacy teams frustrated by the prevalent reactive approach to data security and privacy—which often remains unaligned with evolving codebases—inspired him to start HoundDog.ai.

Connect with Amjad Afanah

 

Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability, from some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast, and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Hey, today we're going to talk all about shift-left strategies for sensitive data protection and privacy compliance. I found a new company that just came out of stealth called HoundDog.ai. They were featured on my News Show, and the founder was cool enough to join us to take a deeper dive into why this solution exists and how it's going to help you with security. You don't want to miss this episode. If you don't know, Amjad is a serial entrepreneur with deep roots in cybersecurity. He led DCHQ, a cloud management startup, to acquisition, and founded APISec.ai, pioneering API security scanning. He really knows his stuff. But, frustrated with the reactive approach to data security, he recently launched HoundDog.ai to align security with evolving codebases. Like I said, I think it's a really great topic, something a lot of people don't know about, and you need to really be thinking about this with your DevOps pipelines. You don't want to miss this episode. Check it out.

[00:01:15] Hey, if your app is slow, it could be worse than an error. It could be frustrating. And one thing I've learned over my 25 years in the industry is that frustrated users don't last long. But since slow performance isn't sudden, it's hard for standard error monitoring tools to catch. That's why I think you should check out BugSnag, an all-in-one observability solution that has a way to automatically watch for these issues: real user monitoring. It checks and reports real-user performance data in real time so you can quickly identify lags. Plus, you get the context of where the lags are and how to fix them. Don't rely on frustrated user feedback. Find out for yourself. Go to bugsnag.com and try it for free. No credit card required. Check it out. Let me know what you think.

[00:02:09] Joe Colantonio Hey, Amjad. Welcome to The Guild.

[00:02:13] Amjad Afanah Thanks so much for having me, Joe. Appreciate it.

[00:02:15] Joe Colantonio Awesome to have you. Like I said, I found you on my News Show. As I was going through LinkedIn, I noticed this company, and I was like, oh, this sounds cool. So before we get into it, I'm always curious to know about founders. It seems like you're a serial entrepreneur. What inspired you to create HoundDog.ai, and how did your experience in data security shape your vision for this platform?

[00:02:36] Amjad Afanah Yeah, I appreciate it, Joe. So like you said in the introduction, this is my third startup. Before founding HoundDog.ai, I was one of the co-founders of APISec.ai, which was an API security scanning platform, and then transitioned into doing different things after that. Right before I started HoundDog.ai, I was with a data security company. And like most data security vendors, I found that most DLP and data security platforms focus almost entirely on discovering data in production, whether it's structured or unstructured data. Based on whatever they discover, they help you apply access controls, usually by asking you to install a proxy in front of any connections or traffic going to the database, and that's how they're able to protect the data in production. While I was there, I started hearing a lot of feedback from various privacy and security teams, and the overwhelming feedback was: it's great that we have these controls in production, but how can we prevent these leaks of personally identifiable information from day one, from the very start, so that we don't have to deal with the repercussions of cleaning up the logs, making the code changes, reviewing who accessed the logs, doing the risk assessments, and in some instances even doing customer notifications, depending on what kind of data was discovered in production? They wanted a more proactive approach to stop it before the code is even merged and pushed to production. And then on the privacy side, specifically for GDPR, compliance is often centered around documenting all the systems and applications that are handling user data. You have to document the data flows: where the data is coming from, why you're collecting it, who you're sharing it with. That's what it really boils down to. And believe it or not, most companies rely on manual surveys. They'll send out surveys to their business units and ask someone: what systems does this business unit use, what kind of user data is it collecting, and so on. As you can imagine, it's a very error-prone, manual workflow, and it often doesn't keep up with evolving codebases, especially for internally built applications. So privacy teams are often the ones most blindsided by changes, because they're even further removed than security from what's happening in development.

[00:05:14] Joe Colantonio Yeah, I used to work for a health care company, and every quarter or so we had to produce a list of all the third-party libraries we were using in our software. So this sounds similar, but it was never right and always a pain, for sure. It sounds like this helps with certain aspects of that. I like to keep it simple to start off with, and maybe we can take a deeper dive after. If someone's listening and they're like, okay, what's a personally identifiable information (PII) leak? Is that just Social Security numbers? What is that all about?

[00:05:40] Amjad Afanah Yeah, PII comes in different categories. Even within PII, there's account data, there's usage data, there's data about your employment history and your education history. All of that is essentially personally identifiable information, where you could infer who the person is by looking at that data. But companies and security or privacy teams usually apply varying sensitivities to that data, because not all data is alike. Collecting a phone number or an email is very different from collecting a Social Security or credit card number. The most critical types of PII are actually passwords and tokens, because if you leak those, they can be immediately used for hacking. Then there's high-sensitivity but less critical data, the stuff I mentioned, like Social Security and credit card information, and protected health information: things about your medical history, your symptoms, your health diagnoses. And then it goes down to the more usual bread-and-butter PII like date of birth, address, and so forth. So that's the high-level explanation of what PII is.

[00:07:02] Joe Colantonio All right. Once again, I used to work for a large company, and because it was regulated, we had to use masked data, not real data from customers. In development, when we were testing, we were using this masked data. So how does PII get into production if you're developing with a lot of data that's not real? How does it get there, I guess?

[00:07:25] Amjad Afanah Yeah, great question. What we look for is not actual exposed data in your codebase. What we're looking for is the code logic that is about to handle sensitive data. That's the exact distinction from what SAST scanners provide today. Most SAST scanners can help you detect exposed secrets: if you have actual passwords or actual API keys and tokens in your codebase, they apply some kind of regex pattern that says, okay, this matches this kind of password, or this kind of API token typically used with AWS, for example. We, on the other hand, are examining a different thing. We're examining the functions, the methods, the classes. We're looking at the names of these things to see whether, based on the name of a function, it's likely to handle a Social Security or credit card number. Then we apply the abstract syntax tree and advanced analysis techniques like interprocedural analysis and taint analysis to follow whatever triggered a match. So if we detected that a function is called SSN, most likely it's handling a Social Security number, and we'll track that variable across the codebase and see if you're invoking it inside a log: you could log it to the console, save it to a file, or, even worse, send it to third-party systems by calling a Sentry or Datadog API and sending that stuff there. That's essentially what we're doing. We don't care what kind of actual data you use when you're doing development and testing. We're looking at the actual code logic to see what it's about to handle once it's actually deployed to production.
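
To make that concrete, here is a minimal Python sketch of the kind of sink being described. It is illustrative only, not HoundDog.ai's actual detection logic; the function and variable names are hypothetical, but they show how a name alone can suggest sensitive data that then flows into a log statement:

```python
import logging

logger = logging.getLogger(__name__)

def update_user_ssn(user_id: int, ssn: str) -> None:
    # The function name suggests it handles a Social Security number.
    # A name-based match plus taint tracking would follow `ssn` to this
    # log call and flag it: raw PII written to application logs, where
    # log pipelines and third-party tools will collect it in production.
    logger.info("Updating SSN for user %s to %s", user_id, ssn)
    # ... persistence logic would go here ...
```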

[00:09:17] Joe Colantonio Gotcha. So you can expose issues that a hacker might normally be able to exploit, before the code gets into production. Am I understanding that correctly?

[00:09:25] Amjad Afanah Right, right. Exactly. So basically, the two main use cases are, first, that your developers make mistakes. They sometimes over-log things. Sometimes they want to debug something and think, okay, maybe if I dump this variable, which has a whole bunch of information in it, into the log, it will help us with debugging once it's deployed in production. Then, once the app is actually deployed in production, it starts logging, and ten other tools that ingest those logs start collecting that information. All of a sudden, a security team that wants to clean up those logs needs ten times the effort and resources just to do that cleanup task. And it depends on what kind of data you're over-logging. The extreme case is, hey, we actually logged customer passwords or customer password hints, which actually happened: a few years ago there was a breach associated with one of the big tech companies where clear-text password hints were exposed and publicly available as part of that hack. In those kinds of cases, it triggers a whole bunch of incident response and customer notification, and you've got to go through a whole hoopla of trying to comply with whatever frameworks apply and make sure you're hitting all the compliance points and all that stuff. We're helping you avoid all of that. Developers make mistakes, and our scanner can often detect those mistakes. And then the remediation is often very simple, right? You either omit that data or, to your point, you mask it. If you apply a masking function to that data, we'll actually know, okay, this is fine to log because it's being masked when you're logging it. But if you're doing it in plain text, that's when we flag it as a vulnerability, basically.
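
Here is a hedged sketch of what that remediation might look like in Python. The mask helper is hypothetical, not a HoundDog.ai API; the point is simply that masking the value before it reaches the log preserves debugging context without writing raw PII anywhere:

```python
import logging

logger = logging.getLogger(__name__)

def mask(value: str, visible: int = 4) -> str:
    """Replace all but the last few characters with asterisks."""
    return "*" * max(len(value) - visible, 0) + value[-visible:]

def update_user_ssn(user_id: int, ssn: str) -> None:
    # OK: only a masked form (e.g. *****6789) ever reaches the log, so a
    # scanner tracking the tainted value sees it pass through a masking
    # function before the logging sink and does not flag it.
    logger.info("Updating SSN for user %s to %s", user_id, mask(ssn))
```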

[00:11:25] Joe Colantonio I know a lot of developers create APIs or IoT, headless devices. A lot of times they don't think about the data because there's no front end. It seems like this would help with that as well, because it's able to flag things before they actually get out there in the wild, sure.

[00:11:40] Amjad Afanah Yeah, absolutely. One of the features we provide, in addition to flagging vulnerabilities, is the ability to do what we call sensitive data visualization. From the very first time you scan a code repository, we'll map out all the sensitive data elements being processed in your codebase. You can then drill down and view a visualization of all the files that process each element and the data sinks where it's actually ending up: is it going to a third-party API, ending up in a log, being stored in a database or data warehouse? Obviously, not everything triggers a vulnerability, but that visualization is also very helpful for getting a snapshot of, okay, we're processing Social Security numbers, and they're being stored safely in a database and encrypted, etc. That visualization helps you understand what's happening to your sensitive data.

[00:12:40] Joe Colantonio Cool. So if you're being audited, if a regulator comes in and says, prove to me that this is compliant or something like that, can you use that report to show them: hey, look, here's what we've done, and here are the things we can prove are compliant with the law?

[00:12:57] Amjad Afanah Yeah, 100%. I think privacy compliance is the most stringent one, and it's completely centered around data flow documentation, which means documenting your processing activities. So that one seems to be the most urgent when it comes to using a feature like this. But there can be other security audits too. For example, we went through SOC 2, and I'll be honest, SOC 2 is a bit lenient. It doesn't necessarily have clear-cut criteria like, hey, give us a sensitive data map so we understand what data you're collecting; it doesn't have a clear guideline like that. But obviously, if you submit it as part of the evidence gathering for the risk assessment control, it can definitely help make your case that, hey, this is the only data we're collecting, we're not collecting anything else. So it can definitely be used in those use cases as well.

[00:13:55] Joe Colantonio Nice. Every time I talk to security folks, or developers when they have to deal with security, they tell me these scanners generate a lot of false alerts. I know you mentioned this is a little different from a normal SAST scanner. So how does it work so you don't get false positives? How do you know it's really detecting sensitive data, that it's a true positive detection, I guess?

[00:14:19] Amjad Afanah Right, yeah. In the several months before we actually launched, we focused primarily on that challenge. Our scanner comes with predefined patterns; these are regular expressions. I mentioned the Social Security number, right? So we'll have a regex pattern that covers the different combinations of what a developer might name a function or method that's about to handle a Social Security number. With our predefined patterns, you get about 80 to 85% coverage. But then we also have an AI integration; we integrate with OpenAI today. What happens is that our scanner collects all the tokens, all the names of functions, variables, methods, etc., and anything that doesn't trigger a match against our patterns we send off to OpenAI using a very finely tuned prompt we've been working on for months. In essence, we ask OpenAI: based on the names of these things, which ones do you think are handling sensitive data? Give us back a confidence score. And we only return the ones that have 100% confidence. Based on our testing, the accuracy has been upwards of 95%, and a number of customers have attested to the high accuracy of that workflow, because we're only returning the very, very high confidence results. What that translates to is that you can use our scanner without AI, and like I said, it should cover the most common PII, PFI (personal financial information), and protected health information. But if you have something custom, let's say you're a government agency and you have a federal registration number or U.S. contractor number, those aren't common things that we'd have out-of-the-box definitions for. OpenAI is actually smart enough to say, I think this is sensitive, and it has been consistently returning those true positives for us. So that's how we're leveraging the power of AI. Yeah.
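
Here is a minimal Python sketch of that two-tier approach, under stated assumptions: the regex patterns are invented for illustration, and the LLM fallback is stubbed out rather than wired to the real OpenAI API, since HoundDog.ai's actual rules and prompt aren't public:

```python
import re

# Hypothetical name-based patterns; the real rule set is not public.
PATTERNS = {
    "social_security_number": re.compile(r"(?i)(ssn|social[_-]?security)"),
    "credit_card_number": re.compile(r"(?i)(credit[_-]?card|card[_-]?number)"),
    "password": re.compile(r"(?i)passw(or)?d"),
}

def match_predefined(name: str) -> str | None:
    """Tier 1: match an identifier name against predefined regex patterns."""
    for data_type, pattern in PATTERNS.items():
        if pattern.search(name):
            return data_type
    return None

def classify_with_llm(names: list[str]) -> dict[str, tuple[str, float]]:
    """Tier 2 (stub): send only identifier names, never code bodies, to an
    LLM and get back (data_type, confidence) pairs. A real integration
    would call the OpenAI API with a carefully tuned prompt."""
    return {}

identifiers = ["update_user_ssn", "render_footer", "fed_registration_no"]
unmatched = []
for name in identifiers:
    if (hit := match_predefined(name)):
        print(f"{name}: predefined pattern -> {hit}")
    else:
        unmatched.append(name)

# Keep only maximum-confidence LLM results, mirroring the episode's
# "only return the 100% confidence ones" policy.
for name, (data_type, score) in classify_with_llm(unmatched).items():
    if score >= 1.0:
        print(f"{name}: LLM -> {data_type} (confidence {score})")
```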

[00:16:25] Joe Colantonio Does anyone ever get freaked out that you're sharing this with OpenAI? Like, is it now going to know, hey, this is a method that could be a vulnerability, and let other people know? I don't know if that makes sense, but could you hack it backwards? It says it's a vulnerability, but now it's listed somewhere, so I could find the names of functions, ones that may not be obvious, that could be hacked for personal data.

[00:16:47] Amjad Afanah Right, right. Yeah. What we send to OpenAI is just the names of functions, methods, and all that stuff; we don't actually send the actual code. As for the reverse-engineering concern, somebody would have to intercept your request going to OpenAI to understand what that request is about, and the OpenAI model is not going to save that information. It's just making that assessment on the fly, based on the naming of these things, to tell you if it's handling sensitive data. It's an evolving space, and we understand that a lot of companies may not be comfortable using OpenAI just yet. So most of our customers start out without the AI integration, especially if they're in a standard fintech, health tech, or insurance tech type of space, for which we have upwards of 90% coverage. But if you're in a more niche space and you have your own unique types of sensitive data that you process, AI is going to be a multiplying force for you. Obviously, we allow you to add your own data definitions, but we don't want you to spend your time doing that. That's been the Achilles' heel of a lot of DLP and data security platforms: most customers end up frustrated at having to define their own definitions, which defeats the whole purpose. If you already know what the data is called, the task of discovering it is already done, right? We're trying to help you discover things you may not know existed in your codebase; you shouldn't have to know what the function or method handling that data is called. That's the whole point of our scanner.

[00:18:42] Joe Colantonio Going back to my original point about third-party libraries: most likely developers don't necessarily know what a library is doing. Does this help you know when you're using a third party that isn't compliant? Will it point that out as well, to say it's not us, this is definitely not compliant, we can't use it, or this is definitely leaking something?

[00:19:04] Amjad Afanah Great question. Instead of third-party libraries, I'll talk about third-party applications, because that's the main thing we're addressing. I think libraries are more for software composition analysis type tools and all that stuff, and we're not covering that, right?

[00:19:22] Joe Colantonio Gotcha.

[00:19:22] Amjad Afanah But third-party applications are exactly the problem we're trying to address, because both privacy and security teams alike are often blindsided by integrations that developers enable in the codebase. That's because they often need to debug things; they may need to send things to Sentry or Datadog or other tools that help them. Oftentimes these teams are blindsided by the fact that an integration even exists. Like I said, privacy teams are even further removed from that, and they rely on people's knowledge. They'll send a survey to the engineering team saying, hey, document all your integrations, and whoever is filling out that survey, if they didn't talk to the engineer who enabled that integration, may miss it. That's the whole point of relying on a source of truth that is the codebase itself, which is always changing. You can get a snapshot of what the integrations look like today, but in a week or two, depending on how frequently the codebase changes, that stuff could change. There was a recent survey, the state of data integrations or something like that, indicating that the majority of companies are adding systems that handle user data as frequently as weekly. So it's almost impossible to keep up with all the things handling user data by relying on manual methods like surveys and spreadsheets. There needs to be a way to plug into your CI pipeline and have the assurance that, even with frequent code changes, you're able to keep track of the data flows, the third-party integrations, as well as PII leaks in logs and files and so forth.

[00:21:15] Joe Colantonio Nice. How hard is it to integrate into a CI pipeline? What's the flow to get this up and running? Is it a long process, a simple process? How does it work?

[00:21:25] Amjad Afanah Yeah, extremely simple. If you go to docs.hounddog.ai, you'll see our documentation, and we integrate with all the major CI pipelines: GitHub, GitLab, Azure Pipelines, Jenkins, CircleCI, and so forth. Our scanner runs as a Docker container, so you basically create a CI config that invokes that Docker container. We can output the results in various formats. For GitLab and GitHub specifically, we can output the results directly into their security dashboards, so you can view the vulnerabilities right there in the GitHub security dashboard or the GitLab vulnerability report. And we can also send the results back to our cloud platform, which allows you to do more sophisticated workflows, like filing JIRA tickets or getting notifications via Slack or email, and have a consolidated view of all the code repos in one platform.

[00:22:24] Joe Colantonio Nice. You mentioned multiple times that you've founded many companies, so I assume you have a lot of ideas and looked at a lot of different venues, a lot of different things to get into. I'm just curious, do you see on the horizon an increase in the need for data security, with the different privacy compliance regulations that countries and states might be coming up with? Was that one of the driving factors, maybe a little bit? Where do you see it going with the evolution of data security and privacy compliance, and why do people need to get on top of it now, if at all, basically?

[00:23:00] Amjad Afanah Right, yeah, exactly. I'll just say that from the data security standpoint, that use case is all about minimizing security risk, improving your security posture, and most importantly, reducing costs. We have an ROI calculator on our website, based on extensive research we've done with our existing customers and other security leaders: it can take at least 80 hours of resources just to remediate one PII leak discovered in production, because it requires the code updates, access log reviews, risk assessments, and the incident response and all that stuff. And we estimate that for every 100,000 lines we scan, we find at least ten vulnerabilities, and that's been consistent for all the customers so far. Obviously, they're not all the same severity; they're usually medium or low severity issues. But still, we're finding a lot of these vulnerabilities, with PII hiding in logs and in third-party systems as the most predominant examples. So it's a pure cost-cutting measure. On the privacy side, you're 100% right. Europe has consolidated toward GDPR, and that's very established. They've actually been fining companies like crazy: in 2023, total GDPR fines were more than 2.2 billion, up from, I think, 0.9 billion the year before. GDPR regulators are going after companies and fining them, and that's a huge concern even for companies that don't operate in Europe; if they have European users, they don't want to be fined, right? The U.S., to your point, is very much scattered, with state-by-state regulations. But you're absolutely right that it's only a matter of time before the U.S. comes together with one federal guideline similar to GDPR when it comes to privacy regulations.

[00:25:20] Joe Colantonio I'm just a one-person operation, and I sometimes get contracts from different companies in Europe asking, am I GDPR compliant? So I would assume it affects everyone. How hard of a sale is this? I'm just curious to know, does everyone get it? Who's the perfect person to tell about this solution? If someone's listening and they get it, how do they get buy-in? Who should they tell first? I don't know if that makes sense.

[00:25:51] Amjad Afanah 100%. 100%. Yeah. We have two buyers. One is the CISO and the VPs and directors of security, specifically AppSec, because even though we do data security, our scanner looks and feels like an AppSec product: it's a code scanner, and ultimately it plugs into the CI pipeline. So they're the ones who can manage it and get up and running with it the fastest. On the privacy side, it's more the chief privacy officer or the various directors and VPs of privacy; sometimes it's legal, sometimes it's the GRC types of departments within their companies. In terms of who really gets it, it's the ones who have been bitten by dealing with some kind of leak. On the security side it's more like, oh man, yeah, we dealt with a leak in logs, and it was nasty, and it wasted I don't know how many days and reduced the productivity of the engineering team, etc. On the privacy side, it's actually much easier to convey the message, because the status quo is really bad: really manual, relying on tribal knowledge and all that stuff. We give them the ability to rely on a more reliable source of truth, which is the codebase that's evolving and changing, so they don't have to rely on those surveys. We can even keep track of unpushed changes so they can be proactive about things the privacy teams want to have input on. They want to make sure that if you're introducing something of high sensitivity, it's being lawfully processed, it's asking users for consent, it's doing X, Y, and Z, and they don't want to be blindsided by something that's already pushed to production and try to patch it after the fact. That's, I would say, the whole sales cycle for us.

[00:27:56] Joe Colantonio I should have asked this question earlier; you kind of touched on it. I check in code, the scanner runs, and it finds an issue. Is the developer alerted right away? How long does it take? Is it a dashboard? And do they even know how to fix it when they get alerted like that? Does it give suggestions on how to resolve the issue?

[00:28:14] Amjad Afanah Yeah, yeah. Great question. It's really up to the AppSec team that configures that workflow. In the CI pipeline, they can say: any time there's an issue of critical severity, I want to fail the build. We often don't recommend that, because you don't want to slow things down, but that's an option. As far as assigning issues, like I said, if you're using our cloud platform, you can use the old JIRA ticket approach and assign JIRA tickets; a lot of companies prefer that model. Or if you're a GitHub or GitLab security dashboard user, both GitHub and GitLab have their own security dashboards where you can get a consolidated view of all the issues discovered, not just from our scanner but from all other code scanners, and that's another way you can assign things to developers. And we provide detailed remediation strategies. The good news is that the type of issues we're finding are not hard to fix. Like I said, you either omit the data: don't log Social Security numbers, you don't need them for debugging, right? That's common sense. Or, let's say you do need to collect this kind of PII, like an email or something; you can use a UUID or do some kind of masking. We don't want to stop you from being able to debug things, but we want to make sure you're doing it safely. So we provide all these mitigation strategies with whatever issue we find.

[00:29:49] Joe Colantonio I have in my notes, and I don't know if this is accurate, that there's a free scanner for early-access customers. If someone's listening and wants to at least try it first before bringing it up to their organization, is that an option?

[00:30:00] Amjad Afanah 100%, yes. Go to our website, HoundDog.ai, and you can sign up for the free scanner. After you fill out the form, we'll just redirect you to the Docker Hub page, and it's just a Docker container. Basically, you pull the Docker container and point it to the directory that has the code you want to scan. You scan it, and it outputs the results directly in the console. You can save them in a Markdown file, and we support JSON and other formats as well. For the free scanner, we've limited the features to generating that sensitive data map, which lets you see all the sensitive data flows that exist in your codebase. For the paid version, you use the exact same scanner but add an environment variable that plugs in the HoundDog API key we give you as a paid user. That unlocks other features like vulnerability detection, the generation of records of processing activities reports, and the ability to do all of that in a consolidated manner in the cloud platform. So that's the distinction between the two, the free and the paid version. Yeah.
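
As a rough illustration of that flow, here is a hedged Python sketch that shells out to Docker. The image name, CLI arguments, and mount path are assumptions for illustration only, not the product's documented interface; check docs.hounddog.ai for the actual invocation:

```python
import subprocess

# Hypothetical invocation of a scanner distributed as a Docker container:
# mount the code to scan into the container and read results from stdout.
# "hounddog/scanner" and the "scan /repo" arguments are placeholders.
result = subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", "/path/to/your/repo:/repo",  # code you want scanned
        "hounddog/scanner:latest",         # hypothetical image name
        "scan", "/repo",                   # hypothetical CLI syntax
    ],
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)  # e.g. a sensitive data map printed to the console
```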

[00:31:11] Joe Colantonio Awesome. Okay, before we go, is there one piece of actionable advice you can give to someone to help them with their DevOps security and PII efforts, and what's the best way to find and contact you and learn more about HoundDog.ai?

[00:31:24] Amjad Afanah Yeah. In terms of best practices, I'll actually be hosting a webinar in the coming weeks where I'll be talking about both preventive and detective controls. We're more on the detective controls side, where we help detect these things as you develop. But on the preventive controls side, I think training is the most important thing: educating developers. There are a lot of security awareness trainings, and oftentimes developers are overwhelmed by the amount of information being thrown at them. But this is one of those things that's easy to communicate, because developers may not know that what they're doing has major repercussions, and oftentimes just omitting or masking data goes a long way toward preventing a lot of those repercussions. And people can reach out to me at amjad@hounddog.ai any time; I'm always here to answer any question.

[00:32:30] Remember, latency is the silent killer of your app. Don't rely on frustrated user feedback. You can know exactly what's happening and how to fix it with BugSnag from SmartBear. See it for yourself. Go to BugSnag.com and try it for free. No credit card is required. Check it out. Let me know what you think.

[00:32:51] And for links to everything of value we covered in this DevOps Toolchain Show, head on over to Testguild.com/p151. And while you're there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions that give you the visibility you need to deliver great software. That's SmartBear.com. That's it for this episode of the DevOps Toolchain Show. I'm Joe. My mission is to help you succeed in creating end-to-end, full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers.

[00:33:24] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at Testguild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.
