How to close the AI security gap

Published on August 2nd, 2018

As we hand data and control over to the machines for autonomous driving, even if Artificial Intelligence (AI) is still rudimentary, how can businesses maintain proper security controls, asks Andrew Foxcroft, regional director, Radware?

Apple recently found itself in the shoes of many other organisations when it discovered that a former employee, working in the tech giant’s autonomous vehicle development team, had stolen trade secrets.

The hardware engineer planned to take the data and join an intelligent electric vehicle start-up, but his plot was foiled after internal investigators noticed a spike in his network activity. Cases like this are common – in fact, some 30% of CXOs say that insider threats put their companies’ security and IP at risk.

The act of stealing data shows the competitive nature of being first to launch new technology. In the case of driverless cars, where trials have been blighted by accidents, any data that can make vehicles safer and trusted is gold dust. Protecting autonomous vehicles from cyber-attacks is also a huge task. After all, no one wants their brand-new car to turn into a giant brick, nor do they want their navigation systems to be hacked and direct them off a cliff edge.

Hacking into AI systems is a real threat, and there’s a lot of ground to cover. Securing applications, ensuring third-party software is flawless and protecting sensitive data about drivers are just a few of the concerns. The big question is: what can be done about it?


Encrypt the data

The key to successful, secure AI is giving it access to good data, but this is also where security can become an issue. Any data an AI system connects to should be encrypted, which limits the damage done by a breach.

Cyber criminals will have a field day if they can access backed-up or archived data, but if it’s encrypted it will be useless to them. That means encrypting data at rest, when it is sitting in data stores waiting to be used, and in transit, when the AI system accesses it.
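The point about breached archives being useless without the key can be shown with a toy sketch. This is a one-time-pad-style XOR illustration only, not production cryptography — a real system would use a vetted authenticated scheme such as AES-GCM from a standard crypto library — and the telemetry record below is hypothetical:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the matching key byte; applying the same
    # operation twice with the same key restores the original data
    return bytes(d ^ k for d, k in zip(data, key))

record = b"route-log:51.5074,-0.1278"   # hypothetical vehicle telemetry record
key = secrets.token_bytes(len(record))  # random key, stored separately from the data
stored = xor_cipher(record, key)        # the ciphertext an attacker would find at rest
recovered = xor_cipher(stored, key)     # only the key-holder can get the record back
```

An attacker who lifts `stored` from a backup learns nothing without `key` — which is why key management, not just encryption, decides how much a breach costs.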

Bot versus bot

If hackers are using AI to mount more advanced attacks, businesses can fight back by using AI for more advanced protection. Humans will find it impossible to stay on top of evolving threats and vulnerabilities on their own, so they need solutions that proactively hunt for emerging threats and provide the correct response to secure the environment.

In fact, 20% of organisations surveyed in Radware’s 2017/18 Global Applications and Network Security Report said they already rely on such solutions and another 28% plan to implement them this year.

Meanwhile, Radware’s 2018 C-Suite Perspectives study found that nearly four out of 10 executives trust automated systems more than humans to protect them against cyber-attacks.

Watch for bad data

A final tip for closing AI security holes is watching for malicious data manipulation. Machine learning uses data both for learning and analysis, so one way that cyber criminals can mess with systems is through manipulation of this data to trick AI systems into learning the wrong lessons.

This can be done by giving AI access to data that guides these systems in the wrong direction. Imagine what would happen if an autonomous vehicle’s AI were trained on a large set of incorrect data – there is a lot of space for mischief and exploitation.
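One simple guard against this kind of manipulation is to screen incoming data for values that sit far outside the norm before they ever reach the learning pipeline. The sketch below is a minimal, hypothetical example using a robust z-score (median and median absolute deviation, which an injected outlier cannot easily skew); real pipelines would layer this with provenance checks on where the data came from:

```python
import statistics

def flag_suspicious(readings, threshold=3.5):
    """Return readings whose robust z-score exceeds the threshold.

    Uses median and median absolute deviation (MAD) rather than mean
    and standard deviation, so a single poisoned value cannot mask
    itself by dragging the baseline statistics toward it.
    """
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:
        return []  # no spread to judge against
    # 0.6745 scales MAD to be comparable to a standard deviation
    return [x for x in readings if 0.6745 * abs(x - med) / mad > threshold]

# Hypothetical sensor feed with one injected value
clean_and_poisoned = [20.1, 19.8, 20.0, 20.3, 19.9, 95.0]
flagged = flag_suspicious(clean_and_poisoned)  # → [95.0]
```

Anything flagged can be quarantined for manual review instead of silently teaching the model the wrong lesson.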

The way to close or at least reduce this security gap is by watching for unusual data both manually and with AI, and by carefully monitoring the origin of data fed into the system. Our growing reliance on AI may trigger thoughts of worst-case scenarios, but the truth is that more pedestrian, yet crucial, security issues can be dealt with to mitigate emerging threats.

The good news is that these threats can be tamed with a strong security posture. As AI plays a greater role in our daily lives, so too will it play a greater role in improving cyber defences. That will be music to the ears of any business looking to close their security gap and any firm planning to win the race to be first to launch autonomous cars.

The author of this blog is Andrew Foxcroft, regional director, Radware

Comment on this article below or via Twitter: @IoTNow_ or @jcIoTnow