Securing AI and ML at the Edge

Organizations are increasingly turning to AI and ML to enhance their cybersecurity operations. Handing some of the most tedious but necessary tasks to algorithms has taken considerable stress off overworked security teams.

But as AI/ML become more ubiquitous within organizations in many other areas, the technologies themselves are at risk of attack. The Harvard Kennedy School’s Belfer Center for Science and International Affairs recently released a report warning of “a new type of cybersecurity attack called an ‘artificial intelligence attack.’”

These attacks, the report said, are different from other types of cyberattacks. “AI attacks are enabled by inherent limitations in the underlying AI algorithms that currently cannot be fixed.” They expand the attack surface by turning data and physical objects into weapons against systems that rely on AI and ML.

AI/ML’s Security Problem

Businesses rely on AI/ML because of its extremely fast processing, though it has also been shown to create massive blind spots, according to Ben Pick, senior application security consultant at nVisium.

“The decision tree algorithms used by AI/ML are based on assumptions and are frequently shown to have severe obstructions and oversights,” Pick said in an email interview.

There is also the complexity of additional use cases and capabilities for AI/ML that don’t necessarily overlap with security. AI/ML specialists contribute to these projects alongside the regular development team, which makes adding a security component to AI/ML even more challenging.

These struggles to incorporate security into devices’ AI/ML functions fall perfectly in line with threat actors’ desire to find vulnerabilities through which to launch their attacks. As much as AI/ML can assist in securing applications, it certainly cannot remove all risks.

“A hacker’s main goal would be to corrupt the inputs to confuse the decision-making algorithms,” said Pick. “This could lead to a piece of duct tape over a speed limit sign—causing an automated vehicle to speed up to an unsafe velocity—or facial recognition incorrectly identifying a person.”
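What Pick describes is an adversarial input attack. As a deliberately simplified sketch of the idea (a toy linear classifier in Python with NumPy, not a real perception model), the example below shows how a small, bounded perturbation to the input can flip a model’s decision; all names and values here are hypothetical.

```python
import numpy as np

# Toy linear classifier: score > 0 means "speed limit 35", else "speed limit 85".
# The weights are illustrative; a real system would use a trained neural network.
rng = np.random.default_rng(0)
w = rng.normal(size=64)                          # model weights
x = w * 0.05 + rng.normal(scale=0.01, size=64)   # a benign input classified correctly

def classify(v):
    return "speed limit 35" if v @ w > 0 else "speed limit 85"

print("original input: ", classify(x))

# Adversarial perturbation in the style of the fast gradient sign method (FGSM):
# nudge every input feature slightly against the class score. For a linear model
# the gradient of the score is simply w, so the attack is analytic.
epsilon = 0.1                                    # small perturbation budget
x_adv = x - epsilon * np.sign(w)

print("perturbed input:", classify(x_adv))
print("max per-feature change:", np.max(np.abs(x_adv - x)))
```

In the physical world, the duct tape on a sign plays the role of that perturbation: a change small enough to go unnoticed by humans but large enough to move the input across the model’s decision boundary.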

Moving to the Edge

AI/ML is often used to augment security by acting as a first line of defense against threats, but what about when AI/ML itself is on the front line and the first to be attacked?

Securing AI/ML at the edge could be the solution to mitigate risks to the technology and to the devices using it. Adding security at the edge increases confidence in the results/inferences derived from the models, explained Larry O’Connell, VP of marketing for Sequitur Labs.

“Also, having high security at the edge allows OEMs to use more sensitive/proprietary models by mitigating the risk of theft,” O’Connell said in an email interview.
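O’Connell doesn’t detail a mechanism, but one common building block for protecting proprietary models on edge hardware is encrypting the model at rest so it is only usable with a device-bound key. The sketch below is an assumption about how that might look, not a description of Sequitur Labs’ product; it uses the Python cryptography library’s AES-GCM, and the key handling and model contents are hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical device key; on real hardware this would live in a TPM,
# TrustZone, or other secure element, never on the filesystem.
device_key = AESGCM.generate_key(bit_length=256)

def encrypt_model(model_bytes: bytes, key: bytes) -> bytes:
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                       # must be unique per encryption
    return nonce + aesgcm.encrypt(nonce, model_bytes, b"model-v1")

def decrypt_model(blob: bytes, key: bytes) -> bytes:
    aesgcm = AESGCM(key)
    nonce, ciphertext = blob[:12], blob[12:]
    # Raises InvalidTag if the blob was tampered with or the key is wrong.
    return aesgcm.decrypt(nonce, ciphertext, b"model-v1")

weights = b"...serialized model weights..."      # stand-in for a real model file
protected = encrypt_model(weights, device_key)
assert decrypt_model(protected, device_key) == weights
```

Because AES-GCM is authenticated encryption, a tampered model fails to decrypt rather than silently loading, which addresses theft and integrity in one step.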

With a strongly defined “good” baseline, anomalies and threats can be identified more easily, making it simpler to build and adapt the security measures and algorithms used to protect devices.
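As a minimal illustration of that baselining idea, the sketch below (hypothetical data and threshold, Python with NumPy) learns simple statistics from known-good readings and flags inputs that fall far outside them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known-good baseline: readings collected while the device behaves normally.
baseline = rng.normal(loc=20.0, scale=2.0, size=10_000)
mu, sigma = baseline.mean(), baseline.std()

def is_anomalous(reading: float, threshold: float = 4.0) -> bool:
    # Flag anything more than `threshold` standard deviations from the baseline.
    return abs(reading - mu) / sigma > threshold

print(is_anomalous(21.3))   # False: within normal variation
print(is_anomalous(45.0))   # True: far outside the learned baseline
```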

Using the Edge to Secure AI/ML

The sheer volume of devices and inputs at and from the edge requires a vast system of inventory and maintenance—on top of securing each device, Pick pointed out. That means fine-tuning AI/ML will require a full understanding of the environments, as well as a large number of human analysts to adapt the AI training inputs.

Also, securing an embedded device requires specialized expertise, much the same way using AI properly does, O’Connell added. Organizations should start projects with security included from the beginning.

“Planning for security means defining your requirements, such as threat models, and quantifying risk,” O’Connell said. “Security extends to the manufacturing, provisioning, deployment and update phases of the product life cycle. For AI specifically, the process to securely update models should be included as part of the update process.”
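One way to make model updates part of a secure update process, sketched below as an assumption rather than a description of any vendor’s pipeline, is to sign each model artifact at build time and have the device verify the signature before swapping the model in. The example uses the Python cryptography library’s Ed25519 support; the key distribution and artifact contents are illustrative.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Build-server side: sign the new model artifact with the vendor's private key.
vendor_key = Ed25519PrivateKey.generate()
model_blob = b"...new model weights, quantization tables, metadata..."
signature = vendor_key.sign(model_blob)

# Device side: the public key is provisioned into (ideally hardware-protected)
# firmware; the update is applied only if the signature checks out.
public_key = vendor_key.public_key()

def apply_update(blob: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, blob)
    except InvalidSignature:
        return False            # reject tampered or unsigned models
    # ...atomically replace the active model, keep the old one for rollback...
    return True

print(apply_update(model_blob, signature))          # True
print(apply_update(model_blob + b"x", signature))   # False
```

In practice the vendor’s public key would be burned into protected device storage during manufacturing, which is exactly the kind of life cycle step O’Connell describes.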

Organizations will need to take on greater responsibility to keep AI/ML secure at the edge, as it will be a large attack surface to monitor. This control offers a faster response to potential attacks and, as AI becomes a bigger target, faster response and mitigation will be key.

Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She has been writing about cybersecurity and technology trends since 2008.
