Prompt Injection Attacks

Novel TokenBreak Attack Method Can Bypass LLM Security Features
Researchers with HiddenLayer uncovered a new attack against LLMs, dubbed TokenBreak, which could let an attacker bypass content moderation features in many models simply by adding a few characters to ...
Security Boulevard

GenAI’s New Attack Surface: Why MCP Agents Demand a Rethink in Cybersecurity Strategy
Elad Schulman
Anthropic’s Model Context Protocol (MCP) is a breakthrough standard that lets LLMs interact with external tools and data systems with unprecedented flexibility ...
Security Boulevard

Infectious Prompt Injection Attacks on Multi-Agent AI Systems
LLMs are becoming very powerful and reliable, and multi-agent systems, in which multiple LLMs collaborate to tackle complex tasks, are upon us, for better and worse. ...
Security Boulevard