API Security Testing In DevOps with Oli Moradov

By Test Guild

About this Episode:

How do you integrate API security testing into the development process? In this episode, Oli Moradov, VP of Dev and Strategic Alliances at NeuraLegion, shares ways you can build API security testing automation directly into your DevOps or CI/CD pipelines. Discover how you can test every build without causing development drag. Listen up!

TestGuild Security Testing Exclusive Sponsor

Micro Focus Fortify is the recognized market leader in application security and is the most comprehensive and scalable application security solution that works with your current development tools and processes. Try it today

About Oli Moradov


Connect with Oli Moradov

Rate and Review TestGuild Security Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

Full Interview with Oli on DevOps API Security Testing

Joe [00:01:19] Hey, Oli! Welcome to the Guild.

Oli [00:01:23] Thank you for having me, Joe. Good to be here.

Joe [00:01:24] Great. I guess before we get into the topic itself, could you just tell us a little bit more about yourself?

Oli [00:01:29] Yeah, sure. I'm VP of Dev and Strategic Alliances here at NeuraLegion, assisting both our end clients and indeed our DevOps consulting partners on their journey toward delivering AppSec testing automation across their pipelines. So, a little bit about NeuraLegion itself. We're really a company that provides a developer-focused fuzzer and DAST solution, enabling runtime AppSec compliance on every build. Similar to the way that Snyk may have revolutionized SCA, or perhaps SonarQube has transformed SAST, we've really tried to simplify that process and make it work for developers, empowering an organization to actually incorporate automated, accurate DAST and fuzzing technology into every pull request, so that developers can resolve security concerns as part of their development process.

Joe [00:02:29] Perfect. So Oli my audience, I think, is mainly beginners. I could be wrong. So I just want to make sure everyone understands what some of this means. So could you explain what is a DAST and what is a fuzzer before we dive into that a little bit more?

Oli [00:02:39] Yeah, absolutely. So DAST is dynamic application security testing. It's almost like an automated black-box test, where you run comprehensive security tests against your applications to look for security issues or security bugs, so that obviously they're not being pushed out into production, to ensure your cybersecurity posture. And obviously, the earlier you can do this in the process, the better. Ideally, organizations should really be striving to incorporate a DAST solution into their development pipelines to be secure by design, and actually to be able to fix and remediate these issues as early as possible.
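As a rough illustration of what a DAST-style check does, here is a minimal Python sketch. This is not NeuraLegion's implementation; the payload and error signatures are just common illustrative examples. A black-box scanner sends a crafted input to the running application and inspects the raw response for signs that the input reached, say, a SQL query unescaped.

```python
# Minimal sketch of a DAST-style SQL injection check (illustrative only).
# A real tool sends payloads like this to the *running* app over HTTP and
# inspects the responses; here we only show the response-analysis step.

SQLI_PAYLOAD = "' OR '1'='1"

# Error strings that commonly leak when input reaches a SQL query unescaped.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # SQL Server
    "sqlite3.operationalerror",               # SQLite via Python
]

def looks_vulnerable(response_body: str) -> bool:
    """Return True if the response body hints at a SQL injection flaw."""
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)
```

A real DAST adds many payload families, response diffing, and timing analysis on top of naive signature matching like this.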

Joe [00:03:25] Perfect. So, you know, I know there's a push towards shifting things left, and more and more developers are getting involved in testing the software. But it seems to be mostly done on the web front-end type of activities. And I know APIs have been around for a long time now. And for some reason, I still see that a lot of companies (maybe I'm wrong) and their developers still aren't really focusing in on APIs or API testing. Do you know why that might be?

Oli [00:03:52] Well, when you look at the DAST market, it's a very well-established technology, but APIs are relatively new in comparison to web apps. And the way these applications are now being architected is shifting, with a very, very heavy reliance on APIs. And I think that's where there is a massive gap: typically these automated tools just didn't really have the ability to test APIs, or to look at single-page applications that obviously rely very heavily on API calls. And that sort of left the market with a heavy reliance on manual testing. I think that perhaps is the main issue, and one that really, really needs to be addressed, particularly when you're looking at DevOps and DevSecOps, where you need that automation and you need to remove the manual human bottlenecks that are synonymous with not only application security testing generally, but specifically with API security testing too.

Joe [00:05:06] Absolutely. And I guess you mentioned two reasons why that seems to be more prevalent now than before: microservices and single-page applications. Is that what you see also as a vendor? I love talking to vendors because you speak to a bunch of different clients and customers. Are more and more companies building applications that really have a foundation of microservices and single-page applications? And maybe that's why we're seeing more and more interest, I guess, in API testing.

Oli [00:05:30] Yeah, absolutely. I mean, we were saying there has been a massive shift, but now we're really seeing digital transformation, business transformation, really leading the way. I mean, look at financial services, banking with all the startups doing over-the-top banking, or the open banking directive, for example. There are many ways that organizations can now interact with each other, and APIs are obviously driving that as well. Microservices, absolutely. With organizations rearchitecting their legacy applications and migrating them into the cloud as part of those digital transformation projects, you're seeing a heavier and heavier reliance on APIs. It's a much more effective way of supporting Agile development, when you have multiple different microservices that you can work on individually and scale your development processes with that methodology.

Joe [00:06:35] Perfect. I think there's almost a misperception about API security, in that when you see a web app, you can see it, there's a user interface, and you know a hacker can probably attack it this way. But for some reason, I think maybe we get a false sense of security when we create headless-type technology, because we think, oh, no one's going to be able to see this API or know how to interact with it. Is that a common misconception? How common are API security breaches?

Oli [00:06:59] Well, API security breaches are common. And we've seen some very well-established, renowned organizations have some pretty severe breaches, or, where they've been lucky, some API breaches that perhaps have been detected through bug bounties, et cetera. The rise of the API...the writing's been on the wall for a very, very long time. Gartner recently did multiple surveys, and they say that actually 90 percent of web-enabled applications will have more attack surface area in the form of exposed APIs than user interfaces. And that's only going to grow exponentially as more and more organizations push forward with their digital transformation, relying very heavily on microservices-based architectures and, subsequently, obviously, on APIs, whether that's still the old SOAP APIs. Of course, the vast majority of organizations will be pushing towards REST APIs. But we're also seeing a very marked increase in the number of organizations that are going to be relying on GraphQL. I think that's a very, very interesting API methodology that we'll see grow exponentially over the next few years, and we've certainly seen an increase in demand for GraphQL testing over recent months.

Joe [00:08:27] So this may be a dumb question. Is there one method that's less secure than the others, like, say, SOAP or REST or GraphQL? Can they all be attacked equally, or is one more prone to attacks or security hacks?

Oli [00:08:40] Well, I think looking at security in that way is actually the wrong way of looking at it. Everything has the ability to be hacked unless you're putting the right efforts in the right places. Ultimately, every organization needs to take security testing very, very seriously. And in order to do that, in order to keep up with Agile development methods and DevOps, it's really all about automation. So many times when I'm speaking to clients, whether that's younger startups or indeed enterprise organizations with hundreds, if not thousands, of developers, there's a heavy reliance on manual testing. And when you look at the number of iterations, the number of concurrent builds that organizations have, it's very, very hard to maintain a good security posture regardless of whether one protocol is more secure than another, because the technical debt that accumulates without the ability to automate that process is insurmountable. And that's where you have your issues. Everyone talks, whether it's right or wrong, about the gulf in the job space in the cybersecurity industry; there were a million unfilled cybersecurity jobs at the last count I checked, in the US alone, I believe. In order to keep up, automation really, really is key. And, you know, I don't think it's about which one is perhaps more secure. I'm not sure there's going to be much in it. The focus really is: whether we're still relying on SOAP (though lots of organizations are very much moving away from that), REST, or indeed GraphQL, how can we ensure that regardless of which of those we're using, we can be secure by design? That we can detect these security vulnerabilities as early as possible as part of our DevOps methodology, to shift that security left.
With some organizations, it really depends on their maturity level; with less mature organizations, we talk about "let's start right in order to shift left." I think organizations have an issue where perhaps they're striving for perfection at the cost of actually making any progress. And that's where organizations need to say, okay, how can we start? Culture, the people, and the processes are very, very important. But then comes: okay, what tooling can assist us in driving that forward, or enable us to achieve some form of progress whilst we mature, and then give us the ability to scale as we mature those processes as well?

Joe [00:11:18] Absolutely. Yeah, once again, with web development there's a high reliance now on open-source components; basically, developers are just taking all these open-source components, stringing them together, and coming up with a solution. I don't know if that's the same with APIs. And if so, I know there's a rise in security breaches of open-source packages. But are there any known API-type libraries that people may be using that could have exploits in them? So they may be focusing on their own code as being secure, but not looking at the third-party components they may be using, which are actually causing security issues?

Oli [00:11:50] Yeah, absolutely. And I think that will always be the case, whether it's APIs or indeed web apps. Speaking with organizations, absolutely, a good SCA is going to be very important, particularly as, like you mentioned, it's very easy to bundle up a few of those components and get a functioning application. And you need to ensure that those vulnerabilities are patched, as it were. But you will always need a full pen test, let's say. I think anyone that says you're going to fully automate that process is going to be wrong. You always need to do a sort of final check. And we have many clients that will say, well, we've implemented Snyk, for example, and now we want to add a very, very comprehensive DAST on top of that. And where in the process are you going to be doing that? So whether I'm talking to the security architect that wants to deliver automation so they're not relying on either in-house VA or very expensive, outsourced third-party VA that's done periodically, they want to be able to automate that process for their own teams: either putting a tool in the hands of their QA to deliver security assurance alongside their functional testing, or ultimately putting a security tool into the hands of those that are not security experts, such as your developers. So, as I mentioned at the start, to be able to run a security test on every build, every commit, or every pull request that they are making.

Joe [00:13:26] Great. And you mentioned before that there still seems to be a high reliance on manual testing. I'm not sure why that would be with API testing, since it seems APIs would be a lot more automatable because you're not dealing with a UI front end. But a lot of the pushback I get from folks is, well, our customers are using our UI, so we test through the UI; then we have all the APIs covered, because the APIs are being called from the front end, so testing them separately is extra overhead. So is there any pushback to testing APIs? Is it an education issue? Why do you think people are still relying on manual testing for these types of testing activities?

Oli [00:14:01] Well, I think the main problem organizations face is actually the inability of their automated tools to truly capture or detect the attack surface. So it's really a coverage issue. Typically they rely on crawling, and that's just not going to cut it, I'm afraid, to put it quite bluntly. APIs are a different beast than just having a web application, and I think that's where the focus needs to be. The majority of the processing and perhaps the logic now sits with the client: sending API requests, getting raw data back, with the processing carried out on the client side. And it's this that's led to fundamental changes to the security threat model. It's now the client that has a lot of the access to the raw data, and the back end is relying on the client side. If hackers are able to intercept that data and exploit and use the API directly, that's where they can really change a lot of that data. That's obviously been made worse by the decomposition of applications into microservices, and that's what's led to this attack surface growing exponentially. So when testing an API, for example, the crawl is not going to work. Actually, you need different discovery methods. So whether that's leveraging a HAR file, a recorded interaction with the target application; or, if you want to integrate into your development pipelines, does your tooling enable you to upload a Swagger or OpenAPI specification? Organizations use Postman, for example. Can the tooling leverage and consume your Postman collections? And typically what you find is that the answer is no. So you have that reliance on manual testing.
And I like to differentiate manual testing from the testing carried out by your developers as part of their pipeline, because the security personnel can also, perhaps with some tooling and different proxies, test an API in a semi-automated way. But the notion of DevOps is really trying to shift that security testing left, to put these capabilities into the hands of your developers, your engineering team, or your QA if you're not at that stage, to be able to run a comprehensive security test on each of those builds, on a specific entry point, with a set range of parameters to test for security vulnerabilities. And I think that needs to be the focus for organizations, as opposed to constantly shifting it right and saying it's not our responsibility. My mother always said a carpenter never blames his tools. But I think in the DevOps world, once you've got the culture and the processes in place, the tooling does play a fundamental role, and ultimately a lot of "blame," in inverted commas if I can say that, can be placed on the tooling. They're slow. They're perhaps complicated to use. One major issue, perhaps not so much with API testing but certainly with web application security testing, is the inaccuracy of the results you get: the false positives that actually lead to even more bottlenecks downstream. How can you effectively deal with this? You need a DAST tool that gives you that single pane of glass, where you can test your web applications, your single-page applications, and your APIs with one specific tool that's going to give you the detection and, more importantly, the coverage that you need.
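To illustrate the schema-based discovery Oli describes, here's a hedged Python sketch. The spec content is invented for the example, and this is not any vendor's implementation: the point is that, given an OpenAPI/Swagger document, a scanner can enumerate every entry point and parameter directly, with no crawling at all.

```python
# Why an OpenAPI/Swagger spec beats crawling for APIs: the attack surface
# can be enumerated directly from the document. Hypothetical spec below.
import json

spec = json.loads("""
{
  "openapi": "3.0.0",
  "paths": {
    "/users/{id}": {
      "get": {"parameters": [{"name": "id", "in": "path"}]}
    },
    "/search": {
      "get": {"parameters": [{"name": "q", "in": "query"}]}
    }
  }
}
""")

def attack_surface(spec: dict) -> list:
    """List (method, path, parameter names): the scanner's test targets."""
    surface = []
    for path, methods in spec["paths"].items():
        for method, operation in methods.items():
            params = [p["name"] for p in operation.get("parameters", [])]
            surface.append((method.upper(), path, params))
    return surface
```

A scanner would then fuzz each listed parameter with attack payloads; a crawler, by contrast, would never discover `/users/{id}` at all because there is no UI linking to it.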

Joe [00:17:40] Absolutely. And I think you have a webinar on this where you had four main areas where a tool should help you. One of them was that single-pane-of-glass type of activity. You also had three others: coverage, attack and payload detection, and speed. Now, it sounds like a lot of these DAST tools have probably been in the market for a long time, and what you're saying is APIs are kind of newer; even if you look at OWASP, there's a different OWASP Top 10 for APIs compared to web. So maybe they don't have some of these newer areas to be focusing on. So for a newer tool, what makes your tool different in these areas compared to another solution someone may be using? Is your solution complementary to what they've already invested in?

Oli [00:18:21] Yeah, that's a good question. So, yes, OWASP has the OWASP Top 10 for APIs, and a lot of that list is very similar to your standard OWASP Top 10, perhaps in a different order. But as I mentioned before, for DevOps you need the culture and the processes. It's about collaboration between teams, breaking down those silos, creating security champions in each of them to spearhead that change, to sell but also manage security testing automation with your developers, to be a sort of go-between that speaks the different languages across the pipeline. But then it's about empowering these different teams so that you can actually scale your testing, while also ensuring that you're doing it at speed. DevOps is all about automation and speed, and the last thing you want to do is add delays into that process. Shifting left ultimately means putting security testing into the hands of your developers. They need to be your first line of defense, as it were. Enabling them to detect, prioritize, and treat security defects like they do their functional defects, and to fix them early, is going to be the cheapest time possible, and it means you're not going to accumulate that technical debt I mentioned already. But in order to do that, you need a tool the developer can actually use. Not a tool that's built for security professionals, which is probably going to be disabled by the developers and join the shelfware. The amount of times I speak to clients where they have licenses for other tech that's just not being utilized, because it's complicated to use and complicated to configure. So a tool that can test both web apps and APIs, that single pane of glass you mentioned, is of course going to get additional buy-in from the teams.
But I think you hit the nail on the head in terms of attacks and payloads. Developers aren't expected to understand, okay, I'm testing my web app now, and then I'm going to be testing my API by leveraging my schema; which tests should I be utilizing here? And that's where our technology really differentiates itself. We have a very, very smart engine with reinforcement learning and machine learning. Some organizations and people love to hear that; others don't. But what it actually enables us to do is automate the process of maximizing or optimizing scan speeds to complement DevOps as much as you can. Our technology has a smart scanning functionality where we look at the target and understand which attacks and which payloads are going to be more relevant, and which parameters are static parameters that, if we didn't understand what they were, we would waste tests on even though they wouldn't have an effect on the target application. So we have the ability to remove the noise, as it were, and carry out very specific, targeted tests against the parts of the target that are going to be relevant. All of that is really about maximizing or optimizing the scan speed without relying on the developers to know what they're doing. So it's really about simplifying AppSec testing so that you can put the trust into the developers and integrate it across their pipeline. They get that security feedback loop to raise a ticket in GitHub or Jira, but you're also ensuring that the CISO or the security team has full visibility of where the vulnerabilities are being made, with which team, on which projects. That can then actually aid in understanding: where do we need to apply some security awareness training? Which teams are making which common mistakes? It's nipping it in the bud. It's enabling your developers to realize, okay, I've made a mistake.
I'm being caught up on it, or brought up on it, now, as I'm doing it, not three months down the line, which matters particularly for API testing, which is very much periodic: quarterly, biannually, or indeed annually. And the accuracy of the tools, I think, also plays a very, very big part in that. If we're looking at DevOps or CI/CD, getting inaccurate results that need manual validation of the vulnerabilities is completely counterintuitive; it's the antithesis of being able to achieve that methodology. So one key, fundamental differentiator of NeuraLegion's technology is the ability to, in a fully automated way, validate every vulnerability that we're able to exploit, which means there's no need for the triaging of false positives. There's no heavy drain on the security team to manually validate them. Every vulnerability that's detected, developers can fix immediately, knowing full well that it's been exploited with a proof of concept and that it's a true positive. And I think those three or four key features are what organizations need to put together a robust security program, but one that's actually going to keep up with the speed of DevOps and not cause that drag.
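The build-gating idea above can be sketched in a few lines. This is a hypothetical CI hook, not NeuraLegion's API, and the severity threshold is an added assumption: the gate fails the build only on findings that carry an automated proof of concept, so unvalidated results (potential false positives) never block a release.

```python
# Hypothetical CI gate: fail the pipeline only on *validated* findings
# (those with a proof of concept) at or above a severity threshold.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2}

def should_fail_build(findings: list, min_severity: str = "medium") -> bool:
    """Return True if any validated finding meets the severity threshold."""
    threshold = SEVERITY_ORDER[min_severity]
    return any(
        f.get("validated", False)
        and SEVERITY_ORDER.get(f.get("severity", "low"), 0) >= threshold
        for f in findings
    )
```

In a pipeline, a nonzero exit code from a wrapper around this check would fail the stage, giving developers the fast, trustworthy red/green signal the episode describes.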

Joe [00:23:53] 100 percent agree. I come from a functional testing background, and the same issues you're talking about here happen with functional testing. When developers are checking in code, they have unit tests with a lot of false positives, so they start turning flags off so a test doesn't run; or if it doesn't meet a certain threshold it gets kicked back, so they start doing weird things to work around it. It also takes up a lot of time, so they get mad when you add more and more of these scanners. So I guess the biggest thing I want to dive into here is the machine learning piece. That seems to be a buzzword in functional testing too. So can you just explain a little bit more? Say it finds a defect: a developer checks in code, and it finds an exploit in one of their APIs. And you're saying it's a true issue because you have some technology that's able to determine it's a real issue, not a false positive. Then what happens? Does the developer have enough information from the tool to know, okay, not only do I know it's a real issue, but now I know how to fix it? Or do they still need to work with the security team to say, hey, what do I do here? I have no idea what this is or what this means.

Oli [00:24:54] Yeah, and you raise a very, very good point there. Everyone talks about web app scanners, and I think this really gives you the best understanding of the difference between other DAST tools and our DAST: we don't scan, we interact with the target application. We go through a process of browser automation, so we are communicating with the target application as a human would. And that enables us to go through an automatic validation process. A nice example of that would be a reflected XSS, where we will almost render the page, look for that reflection, and showcase the reflection with an automated screenshot as a proof of concept. And of course, every vulnerability that's detected is provided with the full request and response in a diff-like view, so developers can really understand where the issue is; the response and the body are all provided. We give remediation advice and links to additional resources, including our own knowledge base, on how to remediate, so developers can understand where they've gone wrong, read up about it, and then go away and try to remediate and fix. And then the technology also provides the developer with the ability to execute that same attack, that same payload, post-remediation, to see if it's been remediated successfully, without the need to go and re-run a whole new scan. So absolutely, the key focus really is enabling developers to mark their own homework, but also, when they get back a vulnerability, it's been verified and validated as a true positive: here's the proof of concept, and this is what you need to do to remediate. And by the way, you're probably doing it now, not three months later, when perhaps that developer's moved on to a new project, working on a different microservice, or working at a different company. They're learning on the go.
And I think that's where organizations need to go. It's really about ensuring that your developers are not going to be routinely making the same mistakes again and again, and the best way of educating them is by pulling them up on their mistakes as early as possible. From a business perspective, and I always mention this for the number crunchers out there, it's going to be the cheapest, most efficient time for you to remediate those fixes. Whilst from a security perspective, it's going to ensure that you are intrinsically secure by design, so that less of what's being pushed out into production is vulnerable. And you're going to have some automation orchestration for your CI/CD pipelines. One thing you categorically can't have is builds failing for no reason, because that's when you suddenly need to start monetizing your risk. Okay, well, we've done a test. We know where our problems are. We've got 250 vulnerabilities here. We know that probably 80 percent of them are going to be false positives. What do we do? The needs of the business say we need to release. The security guys are saying, come on, don't do that. You've got an agile, entrepreneurial engineering team that's probably headed up by a CEO of a new tech startup who says, come on, guys, we need to deliver these new features to our clients. With security, there's always going to be that friction in between. So we've removed the false positives. We've made it very, very easy for developers to use. We've removed the complicated configuration, not only of running the scans but also of configuring them so that the results are relevant. Most tools talk about minimizing false positives; we remove them. Remove the noise, remove the alert fatigue or exhaustion, so that you can put security testing into the hands of your engineering team.
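The reflected XSS validation Oli describes can be sketched as follows. The "pages" here are stand-in functions, not a real application or NeuraLegion's engine: the scanner injects a marker payload and checks whether it comes back unescaped in the rendered response, and re-running the same probe after the fix confirms remediation, exactly the "execute the same payload post-remediation" loop described above.

```python
# Sketch of reflected-XSS detection and post-fix re-validation.
# The page functions are hypothetical stand-ins for a running web app.
import html

XSS_PROBE = '<script>alert("probe")</script>'

def vulnerable_page(query: str) -> str:
    # Reflects user input verbatim into HTML: the bug a DAST would catch.
    return f"<p>Results for {query}</p>"

def fixed_page(query: str) -> str:
    # Remediated version: the same input, HTML-escaped before reflection.
    return f"<p>Results for {html.escape(query)}</p>"

def reflects_payload(page: str) -> bool:
    """Proof-of-concept check: is the raw payload present in the response?"""
    return XSS_PROBE in page
```

Running `reflects_payload(vulnerable_page(XSS_PROBE))` reproduces the finding; the same probe against `fixed_page` comes back escaped, so the developer can confirm the fix without a full rescan.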

Joe [00:28:46] Right. And I don't know if this is a cultural thing or if you see this as well. With functional testing, a lot of times developers say, oh, we have a suite of automated tests that will be run, so I don't really need to test my code. So if you have a solution like you're explaining here, how much testing does a developer still need to do? Can someone have overconfidence in the tooling and think, we're covered, we have a scanner that uses AI and machine learning, so we're good to go; we don't really need to focus on security as much because it'll be caught by the tooling?

Oli [00:29:18] Oh, no. I think no developer wants to put security bugs, or even functional bugs, into their products, and ultimately they're still going to be picked up on it, whether it's now or later. Speaking with teams, speaking with our clients, you want it done as soon as possible. You want to be brought up on these issues as early as possible, and ultimately you don't want to make that same mistake again. No one wants to go back and redo what they've already done. So I think it depends on the team. There's always going to be, perhaps, a cultural thing like you say, and in one office you might have many different cultures, where there may be a heavy reliance on saying, well, I'm not going to worry about that, I'll just let the scanner pick it up, and then I'll fix it. But ultimately, if you're the one coming back at the end of the day, or the end of the week or the month, with multiple different fixes, it's not going to look good on you, and you're the one that's going to be the bottleneck. So I don't think that's really an issue. I think people want to ensure they're doing the right thing. And in an ideal situation, organizations, from the development and engineering team right up through the CISO to the CEO, just want to be as efficient, dynamic, and agile as they possibly can without adding more work. So, no, I think most organizations and most teams, in my experience anyway, don't mind the notion of being pulled up on issues or fixes. But by the same token, subject to the size of the teams, yes, and I won't be naming any names, there are organizations that say, well, sometimes what you don't know, you don't know.
And suddenly, what are you going to do about it? That's perhaps what you're alluding to; Joe, tell me if I'm wrong. But I think now organizations are realizing that you can't really take that stance anymore. There's so much at stake. Recently, I think it was a psychotherapy clinic in Finland that got breached, and there's a hacker exploiting people that had psychological assessments there, from their early teens right up through to adulthood, who are now being extorted for money; otherwise, their records get released. That's just one recent example from the last few days. So security really is so important. There's so much data now that attackers can exploit and leverage that people are waking up and taking security testing seriously, and they're realizing that APIs now really are the key to that data. You need to make sure they are intrinsically secure, not relying on manual testing that's going to be periodic, unless you've got very, very deep pockets and are able to run those manual tests on a daily basis. Whether organizations have 50, 100, or a thousand different APIs, it's really a scalability issue.

Joe [00:32:17] So you mentioned communication a lot, and culture. You are using AI and machine learning. What other insights can be bubbled up? I mean, over time, is it smart enough to say, you know, our Team B that checks in code always has SQL injection issues, so maybe we need to invest in some more education here? I guess what other insights have been bubbled up to the teams security-wise with the tool over time?

Oli [00:32:37] Yeah, well, I think that's going to be the main one. There's a very big focus now on secure coding education and security awareness. And it's about understanding not only your internal teams; a lot of organizations also outsource development to third parties, and you need to be in a position to validate what they're doing, assigning them to a specific project or group, like you can with our technology, to say, okay, are you delivering us secure software? Are you delivering us secure APIs? And if you're not, go back and make the fix, and do it now, before you deliver it to us. That's from the external players. But of course, regardless of whether you're a technology company or a supermarket or Walmart, you want to ensure that anything you're doing is going to be efficient and that your team has the requisite training to ensure that they're delivering maximum effect. So what we call the CISO dashboard on our technology enables you to have that full visibility. So not only is it, okay, where's our risk immediately? I want a snapshot now of our top 20 assets. I want to be able to run a DAST tool now and come back to the results. I want to know where I am now. We remove the false positives, which means you actually know where your risks are, because we don't need to wait five, seven, or 10 days for the reports to be manually filtered or triaged. And that way you can understand where your risks are. Then you can go a little bit more granular. You can say, okay, we've got this risk here, and actually we've noticed that the SQL injection issue seems to be quite prominent with Team A, and Team A are working on this asset, which is our Holy Grail. As a trading platform, this is the one where all of our clients are transacting. Let's give them specific training in dealing with SQL injection.
What are they doing wrong? Let's try to nip it in the bud. And we're seeing a lot of that. That really is the main focus for these organizations. They want to ensure that their teams are working effectively, not only maintaining the Agile DevOps methodology and the speed that comes with that, but also being sure that they can always improve. And NeuraLegion as an organization is the same: everything we do, we look back and learn from it. From the positives, how can we be even better? But we also take any negatives on board so we can improve, to ensure that we're delivering better technology and a better service to our clients. I think any serious business would want to do the same.
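The team-level insight Oli describes — spotting that one team keeps shipping the same class of vulnerability — boils down to counting findings by team and issue type. As a minimal illustrative sketch (the findings list and its field names are assumptions, not the actual dashboard's data model):

```python
from collections import Counter

# Hypothetical findings; in practice these would come from your DAST
# tool's reporting API. All field names here are illustrative.
findings = [
    {"team": "Team A", "issue": "SQL injection"},
    {"team": "Team A", "issue": "SQL injection"},
    {"team": "Team B", "issue": "XSS"},
    {"team": "Team A", "issue": "XSS"},
]

# Count recurring issue types per team to spot where targeted
# secure-coding training would pay off.
by_team_issue = Counter((f["team"], f["issue"]) for f in findings)

for (team, issue), count in by_team_issue.most_common():
    print(f"{team} / {issue}: {count}")
```

Here "Team A / SQL injection" surfaces as the most frequent pair, which is exactly the signal that would justify targeted SQL injection training for that team.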

Joe [00:35:16] What I really love about that also is that you could test everything, all the time, but it almost sounds like this helps you focus not on everything but mostly on risk. And people, in my experience, are really bad at risk assessment. So it almost sounds like not only are you testing more, but you can also focus on the areas of risk within your business, like: this module is critical to all our customers, here's the risk that has been bubbled up to us as an issue, so this is where we want to focus a lot of our testing efforts going forward. Probably.

Oli [00:35:47] Yeah, absolutely. And I can give you two examples. If you're looking at a younger company, let's say a startup, and they want to integrate security testing as part of their pipeline: great, start now, try to shift security testing left. What you find is that you haven't got that accumulating debt of remediation fixes, and you can generate that more secure-by-design approach. Then you're looking at a large enterprise client, let's say one of the big multinational banks, where the technical debt has been accumulating perhaps for years. They know where their risk is, but they just haven't got the resources to remediate it. Their development and engineering teams are going like the clappers and it's spiraling out of control. So you need to be able to take a snapshot of, okay, where are we now? And it's the issue of, one, detection and coverage. If you have coverage issues, you need to always go back and rely on manual testing. APIs are always going to have that issue, because you don't have the automated tools that provide you with the automation to test, and the coverage. But you also then have the issue of, okay, great, we've done all these tests, but we've got a massive pile of vulnerabilities here that we need to fix. So now let's start segmenting them. Let's look at CVSS scoring, or let's look at the issue type or the severity. And let's now fragment that: let's take our thousand assets and look at our top 50 for the moment. And suddenly you find yourself having to forget about the vast majority of your assets to focus on the ones that are most paramount to your organization.
So where enterprise organizations are trying to move to is automation, and ultimately trying to remove the backlog that they have. But from a CISO's perspective, it's being able to take a snapshot in time to say, where are we at, where's my risk? I need to feed back to the board to say, these are the issues, these are the resources that I need, and we now don't need to rely so heavily on our security team. And there's a really interesting statistic: the ratio of developers to security people is fifty to one. So it's always going to be an uphill struggle with that ratio. If you imagine the number of iterations, the number of builds, teams are really working towards business transformation and generating new products and new features. Security just can't keep up. That's why automation is really so important, and you have to have that automation in order to understand where your risk is. GRC is so important to organizations, but it's very difficult to understand where you are unless you've got that accuracy and the scalability to test, ideally, as early as possible. When our co-founders started NeuraLegion, that was really one of the main focuses. Coming from a CTO and a CISO perspective, they understood the need to, one, ensure that AppSec can keep up with the pace of development; and then, from the CISO's perspective with the other hat on: I need to know where I'm at, I need to know where my risks are, and at the moment I'm just looking at so much noise that it's very difficult for me to prioritize the remediation, to prioritize the risks that I want to remediate now, and then start clearing up the backlog.
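The segmentation step Oli describes — rank the backlog by CVSS score and concentrate on the top slice — can be sketched in a few lines. The asset names, issues, and scores below are invented for illustration, not real data:

```python
# Illustrative vulnerability records with CVSS base scores (0.0-10.0);
# a real backlog would be exported from your scanner or tracker.
vulns = [
    {"asset": "trading-platform", "cvss": 9.8, "issue": "SQL injection"},
    {"asset": "marketing-site",   "cvss": 5.3, "issue": "Open redirect"},
    {"asset": "payments-api",     "cvss": 8.1, "issue": "Broken authentication"},
    {"asset": "internal-wiki",    "cvss": 4.0, "issue": "Missing security header"},
]

TOP_N = 2  # remediate the riskiest findings first

# Sort descending by CVSS and keep only the top slice of the backlog.
prioritized = sorted(vulns, key=lambda v: v["cvss"], reverse=True)[:TOP_N]
for v in prioritized:
    print(f'{v["asset"]}: {v["issue"]} (CVSS {v["cvss"]})')
```

In practice the sort key would blend CVSS with business criticality (the "Holy Grail" assets Oli mentions), but the principle is the same: an explicit, repeatable ordering rather than gut-feel triage.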

Joe [00:39:32] So I know I mentioned earlier that you actually have a really great webinar. I'll have it in the show notes kind of around this topic as well. And I think you did a few polls around people's maturity around API testing. Just curious to know if you could share some of those results with us.

Oli [00:39:47] Yeah, sure. In the webinar, Rest Assured: DevSecOps for APIs, I ran a poll that asked the attendees how they would describe their API security testing processes, and I also put that on LinkedIn and Twitter. The results were surprising on the one hand, but really not surprising on the other. An overwhelming 80 percent or thereabouts, I think 79 percent, of respondents carried out their API security testing manually; about 50 percent of that was done internally, and the rest by a third party. It's not surprising, given that historically the available tooling has been ineffective at best at automating the process, purely from a detection and coverage perspective, for example. But with almost 50 percent outsourcing their manual testing to a third party, that's a very expensive problem that compounds these issues even further. Speaking with one client recently, one of their main problems was actually how long it took simply to book the testing in with a supplier, let alone having to wait for the test and the reports of their annual scan. So not only is it manual, but it's so periodic. And how many iterations were there prior to that? Some organizations are lucky enough to have internal testing teams, but as I've alluded to before, with the ratio of security to developers at one to fifty, it's just not scalable, especially with the sheer number of iterations we've mentioned, which compounds the issues. And very few companies have the requisite maturity or experience in their security teams to develop their own convoluted automation with custom scripts, for example. That goes completely against the notion of DevOps. Then there were somewhere in the region of eight or nine percent, and I'm being quite cynical about this, who claimed to have it fully automated.
But in my experience, in the conversations I've had with those who said, yes, we've got our API security testing automated, it's very much later in the process, and you don't get that immediate feedback to developers that you really need to achieve that shift-left methodology that comes hand-in-hand with DevOps.


Joe [00:42:06] Wow. Awesome results. Okay, Oli, before we go, what is the one piece of actionable advice you can give to someone to help them with their API DevOps security testing efforts? And what's the best way to find or contact you, or learn more about your solution?

Oli [00:42:19] First and foremost, I think you need to have buy-in. You need to have that culture. You need to have someone from security sat with your developers, or someone from your development team who is going to be pro-security. Once you have that, the rest should really follow. But as I mentioned before, in order to really achieve that, in order to maintain the agile, entrepreneurial spirit that a lot of organizations and engineering teams have, you need to have automation of your security testing. And keeping this relevant to APIs, you need a tool that has the ability to test multiple different API architectures. So if you're using REST or SOAP, or if you perhaps want to shift to using GraphQL in the future, make sure that your tooling has the ability to do that. You need a tool that provides you with multiple different discovery methods for the attack surface: whether that's generating HAR files in an automated way, right from the developer's dashboard, with a CLI tool; or consuming and leveraging OpenAPI (Swagger) files, for example. Are you using your Postman collections? Are you using other QA automation tools like Selenium or Cypress.io? Why carry out more work? Leverage the work that you're doing already and make your security testing tool work with you, to get your compliance testing against the OWASP API Security Top 10, for example. And if your appetite is for more sophisticated testing, fuzz testing your APIs is something that organizations are really looking at at the moment, and we've seen a big increase in that.
But really importantly, it's about seamlessly integrating it across your pipelines, whether via a REST API or, as I mentioned, with a CLI tool: how can it integrate into other common CI/CD tools, whether it's CircleCI or Jenkins, or Jira for your ticketing, to ensure that you get that automation across your security pipelines? This is really what NeuraLegion's technology has been built from the ground up to achieve. With a security testing tool built with a dev-first approach, we enable organizations to put security testing into the hands of the developers and the hands of your QA, governed and managed by your security team and your CISO, to get full visibility of where your vulnerabilities are, to deliver accurate, actionable results to developers, to remediate as early as possible, and to be secure by design. For any additional information, please feel free to reach out to me. Joe, I'm sure you'll put my contact details at the bottom. We have a very extensive blog and resource page; please read up about us. And obviously, if you have any questions, we're here to help.
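The CI/CD integration Oli describes usually ends in a "security gate": a pipeline step that reads the scanner's report and fails the build when blocking findings exist. A minimal sketch of that idea, assuming a made-up JSON report shape (real tools each have their own format and CLI):

```python
import json

# Severity levels that should break the build; the threshold and the
# report structure below are illustrative assumptions, not any specific
# tool's output format.
FAIL_ON = {"high", "critical"}

def security_gate(report: dict) -> int:
    """Return a CI exit code: 1 if any blocking finding exists, else 0."""
    blocking = [f for f in report["findings"] if f["severity"] in FAIL_ON]
    for f in blocking:
        print(f'BLOCKING: {f["name"]} ({f["severity"]})')
    return 1 if blocking else 0

# A real pipeline step would load the JSON report written by the
# scanner's CLI after the scan completes.
report = json.loads(
    '{"findings": [{"name": "SQL injection", "severity": "high"},'
    ' {"name": "Missing security header", "severity": "low"}]}'
)
print("exit code:", security_gate(report))
```

Wiring this into Jenkins or CircleCI is then just a matter of running the script after the scan step; a nonzero exit code fails the job, which is what gives developers that immediate, per-build feedback.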

