5 AI threats keeping SOC teams up at night

The explosion in the use of OpenAI’s ChatGPT and other large language models (LLMs) — along with a range of other artificial intelligence (AI) and machine learning (ML) systems — is ramping up the security cat-and-mouse game.

AI risks are moving beyond the theoretical and becoming practical threats, and security operations center (SOC) teams need to start preparing. The problem is lag time, said Ali Khan, field CISO for ReversingLabs.

“For the security operations folks, I’m seeing that they’re starting to predict and threat-model a lot of the scenarios that could go wrong [with AI technology].”
Ali Khan

Proactive SOC teams need to start building the threat models and then skill up and build or procure the right tools for the fight, Khan said. One problem: the business climate. Enterprises are looking to cut costs, potentially stalling security budgets for new AI security capabilities.

These challenges could create yet another innovation catch-up cycle for SOC teams if they don’t start getting out in front of the threat posed by AI, Khan said.

Here are five AI threats your security operations team should be planning and budgeting for in order to stay ahead of the emerging risks.

1. AI-enhanced phishing and social engineering

One of the biggest AI threats on the immediate horizon is the use of LLMs and deep learning to scale up highly targeted phishing attacks and other social engineering ploys. Attackers can utilize deep learning to do more automated reconnaissance of their targets and pair that with LLMs to generate emails, phone calls, and video to make their impersonation attacks more realistic than ever, said Petko Stoyanov, CTO for Forcepoint.

“We are going to see more targeted phishing. Text-based generative AI is being used to create very personalized emails impersonating CEOs and other executives.”
Petko Stoyanov

The potential is hair-raising. If attackers can scrape data from a trove of employee LinkedIn profiles to map out the products, projects, and groups those employees work on and then feed that into an LLM, they could generate massive business email compromise (BEC) scams, sending out extremely convincing emails that look as if they are from the employees’ bosses or CFOs and that include precise details about the projects they’re working on. If an attacker managed to compromise company data and feed that into the LLM, that would make the attack all the more authentic-looking.

Stoyanov said SOC teams are going to need more proactive monitoring, which is challenging because traditional threat intelligence is built around hashes and indicators of compromise.

“When you think of advanced persistent threats, every attack is targeted to you and never reused anywhere else. Now what used to be only targeted at certain banks and certain governments can be replicated to smaller businesses because of generative AI. That’s scary.”
Petko Stoyanov
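
As a rough illustration of what more proactive, content-aware monitoring could look like, here is a minimal Python sketch that scores inbound mail for executive impersonation rather than matching hashes. The executive names, internal domains, and urgency markers are invented placeholders, not a recommendation from Stoyanov or Forcepoint.

```python
import email
from email.utils import parseaddr

# Hypothetical watch lists; in practice these would come from HR and mail config.
EXECUTIVES = {"jane doe", "john smith"}            # display names, lowercased
INTERNAL_DOMAINS = {"example.com"}                 # assumed corporate domains
URGENCY_MARKERS = ("wire transfer", "urgent", "gift cards", "confidential")

def score_message(raw_message: bytes) -> int:
    """Return a crude risk score for a possible BEC/impersonation attempt."""
    msg = email.message_from_bytes(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    score = 0
    # The sender claims to be an executive but mails from outside the company.
    if display_name.lower() in EXECUTIVES and domain not in INTERNAL_DOMAINS:
        score += 3
    # Reply-To silently redirects responses away from the From address.
    if msg.get("Reply-To") and parseaddr(msg["Reply-To"])[1] != address:
        score += 2
    # LLM-written lures still tend to lean on urgency and payment themes.
    body = (msg.get_payload(decode=True) or b"").decode(errors="ignore").lower()
    if any(marker in body for marker in URGENCY_MARKERS):
        score += 1
    return score  # e.g., route anything scoring 3 or higher to analyst review
```

The specific heuristics matter less than the shift they represent: detection keyed on content and context rather than indicators that an LLM-generated lure will never reuse.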

2. Generative AI-based malware

The other AI threat that’s coming fast is generative AI-based malware. Last month at RSA Conference 2023, Stephen Sims of SANS Institute demonstrated how easy it was to coax ChatGPT into coding ransomware for him with a series of carefully framed prompts, even though the model is trained to reject requests to build malware.

Based on his research, SANS ranks offensive uses of AI such as this among the five most dangerous attack types for 2023, a category that covers not only malware generation but also zero-day exploit discovery.

Khan said ChatGPT and other generative AI models stand to greatly enhance the way attackers write malware. “We think that’s going to really proliferate a lot of new malware that threat actors are going to be able to produce all the more quickly,” he said.

“So, think of traditional SOCs writing YARA rules to defend against and detect against signature or hashes traditionally. But with LLMs, attackers are producing things so fast that you could almost write code on the fly and remove the detection logic that security operations would be dependent on.”
Ali Khan
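
To make that concrete, here is a small, hypothetical sketch using the yara-python bindings. It contrasts a brittle hash check against a string-and-behavior rule; the payload bytes, rule strings, and C2 domains are invented for illustration.

```python
import hashlib
import yara  # the yara-python bindings

# A toy "known bad" payload and an LLM-regenerated variant with one change,
# standing in for attackers rewriting the same tradecraft on the fly.
original = b"...loader...CreateRemoteThread...c2.example.net..."
variant = original.replace(b"c2.example.net", b"c2.example.org")

# Hash-based detection matches only the exact original bytes.
known_bad_sha256 = hashlib.sha256(original).hexdigest()

# A string/behavior-oriented YARA rule keys on tradecraft, not file identity.
rules = yara.compile(source=r"""
rule suspicious_injection_beacon
{
    strings:
        $api = "CreateRemoteThread" ascii
        $c2  = /c2\.example\.(net|org)/ ascii
    condition:
        all of them
}
""")

for name, sample in (("original", original), ("variant", variant)):
    hash_hit = hashlib.sha256(sample).hexdigest() == known_bad_sha256
    rule_hit = bool(rules.match(data=sample))
    print(f"{name}: hash match={hash_hit}, behavior rule match={rule_hit}")
# The variant slips past the hash check but still trips the broader rule.
```

Neither rule survives a determined rewrite, which is Khan’s point: detection logic anchored to static artifacts will need to shift toward behavior.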

3. AI will unleash new software supply chain attacks

Just as traditional software is exposed to supply chain attacks, AI systems are going to be increasingly vulnerable to attacks that target the supply chain of components feeding their functionality: the AI models, the training data, and the code that goes into building out not just the models but the software that uses them.

Chris Anley, chief scientist for NCC Group, said there are a lot of AI risks associated with the software supply chain and third-party code. 

“The models themselves can often contain executable code, which can result in supply chain and build security issues. Distributed training can be a security headache — [and training] data can be manipulated to create backdoors, and the resulting systems themselves can be subject to direct manipulation; adversarial perturbation and misclassifications can cause the system to produce inaccurate and even dangerous results.”
Chris Anley
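
Anley’s point that models can contain executable code is easiest to see with Python’s pickle-based model formats, which can run arbitrary code at load time. The sketch below shows one possible guardrail, refusing to deserialize a model artifact unless its hash matches a value pinned at build time; the file name and hash are placeholders, and this is an illustration rather than NCC Group guidance.

```python
import hashlib
import pickle
from pathlib import Path

# Hashes pinned at training/build time, e.g., recorded in a CI manifest.
APPROVED_ARTIFACTS = {
    "sentiment-model-v3.pkl": "placeholder_sha256_from_build_manifest",
}

def load_model(path: str):
    """Load a pickled model only if it matches a known-good hash.

    Pickle files can execute arbitrary code when deserialized, so an
    unverified model pulled from a registry or shared drive is effectively
    untrusted code, not just untrusted data.
    """
    artifact = Path(path)
    data = artifact.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    expected = APPROVED_ARTIFACTS.get(artifact.name)
    if expected is None or digest != expected:
        raise ValueError(f"Refusing to load unverified model artifact: {path}")
    return pickle.loads(data)
```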

One of the most-discussed risks in the AI software supply chain is data poisoning. Khan said training data is very often publicly sourced, and when enterprises blindly rely on a model’s output to make predictions or take actions, compromised data can have very costly consequences.

“LLMs can help produce a certain amount of information that you start to rely on, and then threat actors or insider threats might try to poison the data that you’re reliant on as an organization. You’re going to have to start writing detection rules to see if this LLM matches what you’re actually trying to author for your enterprise.”
Ali Khan
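
One starting point for the kind of detection rules Khan describes is to baseline publicly sourced training data and alert on sudden shifts before retraining. The sketch below compares label distributions between a trusted snapshot and an incoming batch; the threshold and example labels are assumptions for illustration, not a ReversingLabs method.

```python
from collections import Counter

def label_distribution(labels):
    """Return each label's share of a dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

def poisoning_drift(trusted_labels, incoming_labels, threshold=0.10):
    """Flag labels whose share shifted more than `threshold` between snapshots.

    A sudden jump in one class (say, 'benign') in publicly sourced data is a
    cheap early signal that someone may be skewing what the model will learn.
    """
    baseline = label_distribution(trusted_labels)
    incoming = label_distribution(incoming_labels)
    return {
        label: round(abs(incoming.get(label, 0.0) - baseline.get(label, 0.0)), 2)
        for label in set(baseline) | set(incoming)
        if abs(incoming.get(label, 0.0) - baseline.get(label, 0.0)) > threshold
    }

# Example: the new crowd-sourced batch is suspiciously heavy on 'benign'.
trusted = ["benign"] * 70 + ["malicious"] * 30
incoming = ["benign"] * 90 + ["malicious"] * 10
print(poisoning_drift(trusted, incoming))  # both labels drift by 0.2 and get flagged
```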

4. Adversarial AI attacks

Whether it is a supply chain attack, data poisoning, or other attack types such as sponge attacks, evasion attacks, or prompt injection, the broader field of adversarial AI attacks — attacks against AI systems themselves — will be a problem for the SOC.

Andy Patel, a researcher for WithSecure, said SOC teams need to rapidly build AI expertise to tackle adversarial AI.

“They do need experts because none of the current solutions for protecting against adversarial attacks are plug and play. It isn’t just something you can go out and buy and stick into your infrastructure, and have it work.”
Andy Patel

Patel said different models do different things, and each presents its own attack surface. “Figuring out what sort of attacks you can perform against them, what sort of attacks adversaries would be interested in performing against them, and those sorts of things, that still requires one to look at each system individually,” he said.
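
To make the evasion class of attack concrete, the NumPy sketch below applies a fast-gradient-sign-style perturbation that flips the decision of a toy logistic-regression “detector.” The weights, sample, and perturbation budget are all invented; real adversarial attacks target far larger models, but the mechanics are the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "detector": logistic regression with fixed, invented weights.
w = np.array([1.5, -2.0, 0.75])
b = -0.25

def flags_as_malicious(x):
    return sigmoid(w @ x + b) >= 0.5

# A feature vector the detector currently flags (true label y = 1, malicious).
x = np.array([1.0, -0.5, 0.8])
y = 1.0

# Fast-gradient-sign-style evasion: nudge the input in the direction that
# *increases* the detector's loss, so the malicious sample scores benign.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w                   # gradient of logistic loss w.r.t. the input
epsilon = 0.8                          # attacker's perturbation budget (assumed)
x_adv = x + epsilon * np.sign(grad_x)

print("original flagged:", flags_as_malicious(x))       # True
print("perturbed flagged:", flags_as_malicious(x_adv))  # False
```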

5. Data theft and IP exposure

One of the big concerns with generative AI such as ChatGPT is that using it can mean feeding sensitive data into an AI system the organization doesn’t own. This creates a nightmare tangle of data risk and compliance issues.

One case in point came by way of Samsung, which last month banned ChatGPT use after employees leaked sensitive data by loading it into the platform. Such incidents are just the tip of the iceberg.
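
Short of an outright ban like Samsung’s, some organizations put a lightweight filter between users and external LLM APIs. Below is a minimal sketch of that idea, screening outbound prompts for obvious secrets before they leave the network; the regex patterns and blocking behavior are illustrative assumptions, not a description of any specific product.

```python
import re

# Illustrative patterns for material that should never reach an external LLM.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_label": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this CONFIDENTIAL design doc and debug key sk_live_abc123def456ghi789"
hits = screen_prompt(prompt)
if hits:
    print("Blocked outbound prompt; matched:", hits)  # and log the event for the SOC
else:
    print("Prompt allowed")
```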

Additionally, organizations that are building their own in-house AI systems or working with vendors and partners building AI models collaboratively must worry about a cascading list of new data security issues.

Oftentimes the working environments of data scientists building AI send data governance right out the window, said Anley. “We now have large data lakes which have to be accessed by either in-house data scientists, or just someone has to be taking care of your customer data in order to use it effectively in an AI system.”

“[That’s] a degree of access to the customer data that probably didn’t exist before the AI system came along. It’s important to look at those new types of data security problems, because that’s another way that you can have a data breach now.”
Chris Anley

Recognize the risk and recalibrate

With businesses facing yet another cyclical downturn, Khan fears SOCs are heading into the AI adoption explosion at exactly the wrong time. With generative AI being embraced by technology giants, the risk is ramping up fast.

“You really need to think of and plan ahead for the next fiscal year what kind of scenarios your organization can be exposed to as a result of this.”
Ali Khan

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Ericka Chickowski. Read the original post at: https://www.reversinglabs.com/blog/5-ai-threats-keeping-soc-teams-up-at-night

Ericka Chickowski

An award-winning freelance writer, Ericka Chickowski covers information technology and business innovation. Her perspectives on business and technology have appeared in dozens of trade and consumer magazines, including Entrepreneur, Consumers Digest, Channel Insider, CIO Insight, Dark Reading and InformationWeek. She's made it her specialty to explain in plain English how technology trends affect real people.
