Cybersecurity Insights with Contrast CISO David Lindner | 11/22/24

Insight #1: CISO-less companies, you’re playing with fire

Only 45% of American companies have a chief information security officer (CISO), according to new research. It’s time to ask a hard question: Are ...

dope.security Embeds LLM in CASB to Improve Data Security

Tags: CASB, DLP, LLMs, sensitive data
dope.security this week added a cloud access security broker (CASB) to its portfolio that identifies any externally shared file and leverages a large language model (LLM) to identify sensitive data ...
Security Boulevard

How the Promise of AI Will Be a Nightmare for Data Privacy

Tags: AI, Data Privacy, LAMs, LLMs
But as we start delegating to LLMs and LAMs the authority to act on our behalf (our personal avatars), we create a true data privacy nightmare ...
Security Boulevard

How LLMs are Revolutionizing Data Loss Prevention

As data protection laws take hold across the world and the consequences of data loss become more severe, let’s take a closer look at the transformative potential that LLMs bring to the ...
Security Boulevard

Hallucination Control: Benefits and Risks of Deploying LLMs as Part of Security Processes

LLMs have introduced a greater risk of the unexpected, so their integration, usage and maintenance protocols should be extensive and closely monitored ...
Security Boulevard

Google’s Project Naptime Aims for AI-Based Vulnerability Research

Security analysts at Google are developing a framework that they hope will enable large language models (LLMs) to eventually run automated vulnerability research, particularly analyses of malware variants. The ...
Security Boulevard

Leading LLMs Insecure, Highly Vulnerable to Basic Jailbreaks

Tags: AI, AI Security, jailbreak, LLMs
“All tested LLMs remain highly vulnerable to basic jailbreaks, and some will provide harmful outputs even without dedicated attempts to circumvent their safeguards,” the report noted ...
Security Boulevard

Novel LLMjacking Attacks Target Cloud-Based AI Models

It was probably inevitable. Threat researchers detected bad actors using stolen credentials to target LLMs, with the eventual goal of selling the access to other hackers ...
Security Boulevard