
Testers, Are You Ready for Adversarial AI?

By Test Guild

What is Shadow AI?

I’ve been hearing a lot about Shadow AI recently. It was a new term to me, so I was lucky to have the chance to speak with Dr. Arash Rahnama, Head of Applied AI at Modzy, to learn more about it.

Dr. Arash describes Shadow AI as the gap between what a data scientist develops in the lab and the actual product the company may need.

There's always a difference between what you test in your lab with your own data sets, and a product that can actually run on larger-scale data sets and applications.

Like any other cybersecurity target, these AI models are also vulnerable to attack; they can be hacked or misused.

How does this happen with pre-trained models in the training phase?

When you develop an AI model, it's usually designed to perform a specific task. You train it on a training data set, and based on what it encounters during training, the model learns to make predictions.

You then take that model and develop the final product, which runs on your input data at test time, during inference.
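If you haven't trained a model yourself, here's a minimal sketch of that train-then-infer split, using scikit-learn with made-up data standing in for a real data set:

```python
# Train a model on a training set, then run it on held-out data at inference
# time. The random features and labels here are stand-ins for real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 20))                 # stand-in feature vectors
y = rng.integers(0, 2, size=500)          # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # training phase
predictions = model.predict(X_test)                              # inference phase
print("held-out accuracy:", model.score(X_test, y_test))
```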

After you deploy your AI model, it can tolerate a certain amount of ordinary noise, but an attacker can add carefully engineered noise to the input and completely fool the predictions the model was trained to make.

This is the field of adversarial AI.

Adversarial AI Attack

Is it possible for a bad actor to look at an AI model you've trained, developed, and deployed and figure out how to change those inputs?

Arash explained that it is: an attacker can make the model stop performing as it's designed to during inference or deployment, all without the user noticing.

That's the tricky part.

These changes are so small and easy to make (and so hard to detect) that it becomes quite problematic for many different applications because you think your AI model is working, but it really isn’t…and it may take you some time to notice that.

So, you are relying on these predictions. You are listening to an AI model, but the information you're getting back may be compromised.
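Here's a rough sketch of what such an engineered-noise attack can look like in code, using the widely known Fast Gradient Sign Method; the tiny classifier, input, and epsilon value are made-up stand-ins rather than anything from Dr. Arash's work:

```python
# Craft an adversarial input by nudging each feature in the direction that
# increases the model's loss (FGSM). Model and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 10))   # stand-in for a trained classifier
model.eval()

x = torch.randn(1, 20)                     # stand-in for a legitimate input
true_label = torch.tensor([3])

# Compute the gradient of the loss with respect to the input itself.
x_adv = x.clone().requires_grad_(True)
F.cross_entropy(model(x_adv), true_label).backward()

epsilon = 0.05                             # small enough to pass for ordinary noise
x_perturbed = (x_adv + epsilon * x_adv.grad.sign()).detach()

# On a real, trained model even this tiny perturbation often flips the output.
print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_perturbed).argmax(dim=1).item())
```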

A closely related type of attack, one that targets the training phase rather than inference time, is referred to as data poisoning.


What is AI Data Poisoning?

Gartner has predicted that through 2022, 30% of all AI cyber-attacks will leverage training data poisoning.

So, what is AI data poisoning?

Data poisoning is one of the subfields of adversarial machine learning.

What we mean by data poisoning is that at the very beginning, when a data scientist starts training his or her models, the training data sets themselves have already been compromised by the adversary.

Data poisoning happens in the lab: you are training models on data sets that you trust, but that may in fact have been compromised, or poisoned, by the adversary.

You then train your final model on that poisoned data set, and everything seems normal.

You get the performance indices and predictions you expect on the test data set.

As a data scientist, you wouldn’t notice that something may be wrong with your data set.

But once you deploy your package, the model is compromised in the sense that it underperforms on the specific cases the adversary was interested in when they poisoned your data set.

So it’s still performing well 80% of the time.

But for the other 20%, those specific cases, the model underperforms. And again, it becomes tough to detect because the failure is hidden within the predictions the model is producing.
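To make that concrete, here's a minimal sketch of one common poisoning recipe, a backdoor trigger, with made-up NumPy arrays standing in for a real training set; the trigger patch, poison rate, and target class are all illustrative:

```python
# The adversary plants a small "trigger" patch in a handful of training images
# and relabels them. A model trained on this data still looks healthy on clean
# test data, but inputs carrying the trigger get pushed toward the target class.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 32, 32, 3))        # stand-in training images
labels = rng.integers(0, 10, size=1000)       # stand-in training labels

TARGET_CLASS = 7                              # class the attacker wants forced
poison_idx = rng.choice(len(images), size=20, replace=False)  # only ~2% of the data

for i in poison_idx:
    images[i, -4:, -4:, :] = 1.0              # tiny white square in the corner
    labels[i] = TARGET_CLASS                  # flip the label to the target class

# Training on (images, labels) now bakes the backdoor into the final model.
```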

Why should testers worry about this?

As a tester, you might be wondering, why should I care about this?

Well, there are many examples of adversarial attacks that you should be aware of.

For example, autonomous cars.

When it comes to autonomous cars, it's possible to hack the image classification models deployed on many different autonomous vehicles and driverless cars to detect traffic signs, for example.

If I'm a bad actor and I want to cause an accident, it would be effortless to hack that image classification model; my objective, obviously, is to compromise the safety of the driver.

It's possible to hack an image classification model on a driverless car so that it's not detecting the speed limits or a stop sign and doesn’t stop at an intersection, leading to an accident.

So, autonomous cars are anything but autonomous if the models they rely on can be fooled; you want to ensure that the system you're handing the task to performs as expected. Privacy and security issues may also arise if that system is compromised with adversarial backdoors.







Don't miss Dr. Arash's Secure Guild Session on Attacking AI with Adversarial Inputs and How to Defend against It!


The defense sector is another example.

If you’re deploying AI models for mission-critical objectives or in mission-critical environments, you want to be sure the AI system is doing what it's supposed to do.

Some significant examples would be the Internet of Things (IoT), automated cities, automated traffic control, or air traffic control.

As we use AI and deep learning systems more and more in specific applications, you want to make sure your system is secure, because these systems are now replacing some of the mundane tasks that humans used to perform.

You're relying on a system to automate these processes, and you want to be sure they're performing as expected.


Ways to combat data poisoning

Just as the adversarial, offensive side of AI is growing, its defensive side is growing as well.

Various defensive solutions are being created to help protect your data, your AI model, or your data science processes.

For example, Dr. Arash and his team at Modzy have created some AI models that lead to more robust systems.

He presented this work at the CVPR computer vision event.

The main idea behind their work comes from automation and control theory, which has spent the last hundred years developing systems that are robust against external noise and disturbances.

If you're designing factory robots, for example, you define specific goals for that robot, but you also create it in a very stable manner.

If something unpredictable happens, the robot can deal with that.

They looked at AI systems in the same manner.

They said, “Okay…what is the AI system itself?” It's a black box. It's a nonlinear system.

Once you deploy that AI model, it has a specific objective.

But at the same time, it's dealing with the environment, external noise, and disturbances, so adversarial noise and attacks can be treated as a subset of that external noise.

You have to develop your model so it's robust against anything unpredictable that may happen in order to fulfill its objective and the specific tasks that it’s been assigned.

They took the science from automation, how stability and robustness are defined, and applied it to AI systems.

And they developed this new way of training based on back-propagation, which leads to the same robustness indices that are common in control theory.

Applying this to AI systems and models, they have shown that they can outperform many other solutions in this area by developing robust neural networks that defend against adversarial attacks.
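The article doesn't spell out Modzy's control-theory-based training procedure, so the sketch below shows only the generic adversarial-training recipe (perturb each batch toward higher loss, then update the weights on the perturbed batch) to give a feel for what this class of defense looks like in code:

```python
# Generic FGSM-based adversarial training, not Modzy's method: each batch is
# perturbed on the fly, and the weights are updated on the perturbed inputs
# so the network learns to stay correct under that noise.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.05

def train_step(x, y):
    # Craft adversarial versions of this batch.
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Update the weights on the perturbed inputs instead of the clean ones.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example step with stand-in data:
loss = train_step(torch.randn(64, 20), torch.randint(0, 10, (64,)))
```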

What to keep in mind when developing your AI Models

Dr. Arash’s best piece of advice is to make sure that the way you're training your models is not the traditional way of doing model training, which is outdated at this point.

It's good to consider performance, but don't overlook bias, ethical issues, or adversarial robustness.

Not only do you want to consider prediction accuracy, but you should also look at how you can explain your models’ outcomes, and how you can use that to surface and deal with the biases innate in your process.

As a tester, you should also ensure your model is making correct predictions on both clean data inputs and adversarial inputs.
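One way a tester might turn that advice into an automated check is sketched below; it assumes `model`, `inputs`, and `labels` come from your own project, and the accuracy thresholds are placeholders you would tune:

```python
# Compare the model's accuracy on clean inputs against the same inputs with
# FGSM noise added, and fail the check if either drops below a floor.
import torch
import torch.nn.functional as F

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def fgsm(model, x, y, epsilon=0.05):
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def check_robustness(model, inputs, labels, clean_floor=0.90, adv_floor=0.70):
    # clean_floor / adv_floor are illustrative thresholds; pick your own.
    clean_acc = accuracy(model, inputs, labels)
    adv_acc = accuracy(model, fgsm(model, inputs, labels), labels)
    print(f"clean accuracy: {clean_acc:.2%}  adversarial accuracy: {adv_acc:.2%}")
    assert clean_acc >= clean_floor, "model fails on clean inputs"
    assert adv_acc >= adv_floor, "model degrades too much under adversarial noise"
```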

To summarize, model development in the field of AI and data science is now about:

  • Performance
  • Robustness
  • Lack of bias

These three objectives should be first in mind when you’re developing your AI models.






