
‘Slopsquatting’ and Other New GenAI Cybersecurity Threats
‘Slopsquatting’ and Other New GenAI Cybersecurity Threats
As generative artificial intelligence develops, new terms and emerging threats are grabbing headlines regarding cyber threats to enterprises.
April 27, 2025 •
Adobe Stock/InfiniteFlow
As I was perusing global technology headlines this past week, a new generative AI threat grabbed my attention: “slopsquatting.”
According to a recent CSO magazine article:
“Cybersecurity researchers are warning of a new type of supply chain attack, slopsquatting, induced by a hallucinating generative AI model recommending non-existent dependencies.
“According to research by a team from the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma, package hallucination is a common thing with Large Language Models (LLM)-generated code which threat actors can take advantage of.
“’The reliance of popular programming languages such as Python and JavaScript on centralized package repositories and open-source software, combined with the emergence of code-generating LLMs, has created a new type of threat to the software supply chain: package hallucinations,’ the researchers said in a paper.
“From the analysis of 16 code-generation models, including GPT-4, GPT-3.5, CodeLlama, DeepSeek, and Mistral, researchers observed approximately a fifth of the packages recommended to be fakes.”
The article goes on to show why this is bad news and describe how threat actors can exploit hallucinated names.
This YouTube video further outlines slopsquatting:
OTHER GenAI CYBER THREATS TO CONSIDER
This blog has covered a variety of benefits and cyber challenges posed by GenAI use in enterprises, as well as cyber attacks that bad actors are carrying out using GenAI. Many of those threats were articulated by top security vendors back at the start of the year in this list of “The Top 25 Security Predictions for 2025.” In February, I covered AI disruption with the DeepSeek effect globally, and last year I looked at email scams and phishing threats posed by GenAI.
But in addition to addressing shadow AI, phishing scams and these other GenAI threats, there are several excellent reports that highlight security threats from not securely using LLMs. For example, an article from The Banker states that “LLMs can’t keep a secret”:
“A less obvious problem that financial organisations may be facing as they introduce GenAI-powered tools for internal usage is the issue of oversharing. In enterprise applications, LLMs often undergo additional training on the company’s internal data; even more information can be fed into them as part of daily usage by employees in different parts of the organisation. However, LLM-powered chatbots can be rather indiscriminate in regurgitating and sharing even the most sensitive details.
“’LLMs can’t keep a secret,’ explains Evron, whose start-up is working on solving this very issue. As a result, ‘a quality assurance engineer can be exposed to HR files or a marketing intern can be exposed to next quarter’s sales [projections] in an SEC-regulated company.’
“Evron argues that having access control on a need-to-know basis is the foundation of secure AI. It is, however, not easy to fully achieve due to today’s LLMs not having reliable ways to protect parts of the data they are trained on.”
HOW PROMPT ATTACKS EXPLOIT GenAI AND HOW TO FIGHT BACK
I also really like this whitepaper from Palo Alto Networks called Securing GenAI: A Comprehensive Report on Prompt Attacks: Taxonomy, Risks and Solutions.
The whitepaper comprehensively categorizes attacks that can manipulate AI systems into performing unintended or harmful actions — such as guardrail bypass, information leakage and goal hijacking. In the appendix, it details the success rates for these attacks — certain attacks can be successful as often as 88 percent of the time against certain models, demonstrating the potential for significant risk to enterprises and AI applications.
To address these evolving threats, Palo Alto introduces:
- A comprehensive, impact-focused taxonomy for adversarial prompt attacks
- Mapping for existing techniques
- AI-driven countermeasures
This framework helps organizations understand, categorize and mitigate risks effectively.
As AI security challenges grow, defending AI with AI is critical. This research provides actionable insights for securing AI systems against emerging threats.
The report highlights why this is such an important topic in 2025:
“The urgent need to care about prompt attacks stems from the potentially far-reaching and disruptive consequences they pose. As LLMs and GenAI become deeply integrated into critical operations and decision-making processes, adversaries can exploit subtle vulnerabilities to manipulate model outputs, coerce unauthorized behaviors, or compromise sensitive information.
“In some cases, the GenAI apps might generate responses that disclose personally identifiable information (PII) or reveal internal secrets to attackers, drastically increasing the exposure of confidential data. They might also produce dangerous or vulnerable code snippets that, if implemented, could lead to system breaches, financial losses, or other severe security incidents. Even minor prompt manipulations can have outsized impacts. For example, imagine a healthcare system providing incorrect dosage guidance, a financial model making flawed investment recommendations, or a manufacturing predictive system misjudging supply chain risks.
“Beyond these operational risks, prompt attacks also threaten trust and reliability. If stakeholders cannot rely on the outputs of GenAI systems, organizations risk reputational damage, regulatory noncompliance, and the erosion of user confidence. From an ethical standpoint, output bias in compromised GenAI systems can lead to unfair or skewed decision-making, reinforcing societal inequalities and undermining credibility. These types of bias can affect such areas as hiring processes, financial assessments, and legal judgments, amplifying real-world consequences. Later in this paper, we present real-world attack examples and share protection guidance to illustrate these issues in practice.”
FINAL THOUGHTS
A few weeks ago, I wrote a related article for InfoSecurity Magazine entitled “Rethinking Resilience for the Age of AI-Driven Cybercrime.” The piece covers some practical steps to consider as you build out your GenAI programs. Here is how I started and ended that article (which is certainly relevant to this blog and our need for a sense of urgency):
“AI isn’t just changing the paradigm of cybercrime — it’s creating a new, larger attack surface with no rules. Just as culture eats strategy for breakfast, generative AI is swallowing up automated phishing campaigns, deepfake fraud, malware creation and much more at an unprecedented scale — fueling a new era of cyber-attacks. …
“Over the past decade, there have been numerous wake-up calls on cybersecurity after major data breaches, supply chain failures, critical infrastructure outages and more. But now, cyber pros face a much more profound paradigm shift.
“Just as the global move from horse and buggy to automobiles required new roads, gas stations and many other infrastructure advances to speed travel, the future of cyber resilience requires a new way of thinking about AI-powered cyberattacks—and how we will defend our vital data and critical infrastructures into the 2030s.
“So don’t get stuck just focusing on how to feed your current cyber horses.”
CybersecurityArtificial Intelligence
Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.
See More Stories by Dan Lohrmann
*** This is a Security Bloggers Network syndicated blog from Lohrmann on Cybersecurity authored by Lohrmann on Cybersecurity. Read the original post at: https://www.govtech.com/blogs/lohrmann-on-cybersecurity/slopsquatting-and-other-new-genai-cybersecurity-threats