Like Any Other Software, AI Needs Security Built In, CISA Says

The nation’s top cybersecurity agency is reminding developers and organizations alike that AI innovations are not immune to the larger IT security initiatives the government is putting in place.

AI and machine learning may be fueling much of the high-profile innovation in IT, with their reach extending ever deeper not only into business and research but into society as well. At its core, though, AI is still a technology that needs security built into it, two officials with the Cybersecurity and Infrastructure Security Agency (CISA) recently wrote in a blog post.

“Discussions of artificial intelligence (AI) often swirl with mysticism regarding how an AI system functions,” Christine Lai, AI security lead, and Jonathan Spring, senior technical advisor, wrote. “The reality is far more simple: AI is a type of software system. And like any software system, AI must be Secure by Design.”

CISA has been promoting its Secure by Design initiative since soon after the Biden Administration took over the White House, pushing software developers and manufacturers to ensure that security is built into their products throughout the development lifecycle rather than shifting the responsibility and cost of security onto customers.

Most recently, CISA asked for suggestions from the tech industry for securing open source software, which is being abused by threat actors in supply-chain and other attacks.


AI May Be Different, But Security Is Still Key

In their post, Lai and Spring noted that the ways security is built into AI products may differ from those used in other software and that AI security and safety practices are still being worked out. In addition, the ways bad actors use and abuse AI software are often specific to AI, such as subtly altering images to make an automobile’s AI system change how the car behaves or to hide objects from the software in security cameras.
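Attacks of this kind are often demonstrated with a gradient-based perturbation such as the fast gradient sign method. The sketch below is illustrative only and assumes a PyTorch image classifier `model`, an input tensor `image`, and its true class `label`, none of which appear in CISA’s post.

```python
# Illustrative FGSM sketch, assuming a PyTorch classifier `model`, an input
# tensor `image` with values in [0, 1], and its class index tensor `label`
# (all hypothetical; not taken from CISA's guidance).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an image nudged just enough to try to flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a small amount in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```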

They also point out that everyone from top executives to lawmakers to academics is wrestling with how to build safe and fair AI systems. Still, the fundamentals of securing software need to apply to AI.

“AI is software that does fancy data processing,” they wrote. “It generates predictions, recommendations, or decisions based on statistical reasoning (precisely, this is true of machine learning types of AI). Evidence-based statistical policy making or statistical reasoning is a powerful tool for improving human lives. Evidence-based medicine understands this well. If AI software automates aspects of the human process of science, that makes it very powerful, but it remains software all the same.”

The accelerated innovation around AI – in such areas as generative AI and foundation models – is putting increasing pressure on organizations to adopt AI software, pressure that cascades from the top throughout a company. Everything from AI software design and development to data management and AI system integration needs to include the security practices and policies applied to other software.

Beware the ‘Technical Debt’

AI engineers may avoid applying such practices because of the demand to keep ramping up innovation, and developers will likewise take on this “technical debt” rather than embrace common security practices as the pressure to adopt AI systems grows, they wrote.

“Since AI is the ‘high interest credit card’ of technical debt, it is particularly dangerous to choose shortcuts rather than Secure by Design,” Lai and Spring wrote.

Some parts of AI, such as data management, include operational aspects that differ from other types of software and will require different security practices, but engineers and others working on AI software should start applying existing methods now and adapt as needed.

That includes protecting against untrusted code execution, putting vulnerability identifiers in place, adhering to privacy principles, and issuing software bills of materials (SBOMs).
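As one illustration of the first item, loading third-party model weights without allowing arbitrary code to run during deserialization is a small, concrete step. The sketch below assumes PyTorch and a hypothetical checkpoint file name; neither is specified in CISA’s guidance.

```python
# Minimal sketch: load untrusted model weights without executing arbitrary code.
# PyTorch and the file name are illustrative assumptions, not from CISA's post.
import torch

def load_untrusted_checkpoint(path: str):
    # weights_only=True restricts deserialization to plain tensors and simple
    # containers, so a booby-trapped pickle payload cannot run code on load.
    return torch.load(path, map_location="cpu", weights_only=True)

state_dict = load_untrusted_checkpoint("downloaded_model.pt")  # hypothetical file
```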

Other AI Security Efforts are Underway

The push for AI community adoption of Secure by Design is part of larger efforts by the government and private sector to make AI technology secure.

The U.S. Defense Advanced Research Projects Agency (DARPA) earlier this month unveiled the AI Cyber Challenge to encourage cybersecurity and AI specialists to design ways to automatically detect and fix software vulnerabilities and protect critical infrastructure.

In addition, seven high-profile companies – including Google, Microsoft, OpenAI, and Meta – are working with the White House to address risks posed by AI. Separately, Google, Microsoft, OpenAI, and Anthropic in July announced the creation of the Frontier Model Forum, an industry group looking at ways to ensure the safe and responsible development of frontier AI models, described by OpenAI as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.”

“Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks,” Google wrote in a policy statement about the group, adding that “further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly.”


Jeffrey Burt

Jeffrey Burt has been a journalist for more than three decades, writing about technology since 2000. He’s written for a variety of outlets, including eWEEK, The Next Platform, The Register, The New Stack, eSecurity Planet, and Channel Insider.
