MY TAKE: Notes on how GenAI is shifting tension lines in cybersecurity on the eve of RSAC 2025

By Byron V. Acohido

SAN FRANCISCO — The first rule of reporting is to follow the tension lines—the places where old assumptions no longer quite hold.

I’ve been feeling that tension lately. Just arrived in the City by the Bay, along with some 40,000-plus cybersecurity pros and company execs flocking to RSAC 2025 at Moscone Center.

Many of the challenges they face in mitigating cyber risks haven’t fundamentally changed over the past two decades I’ve been coming to RSAC; they’ve just intensified. But the arrival of LLMs and GenAI has tilted the landscape in a new, disorienting way.

Yes, the bad actors have been quick to leverage GenAI to scale up their tried-and-true attacks. The good news is that the good guys are doing so as well: incrementally, and mostly behind the scenes, language-activated agentic AI is starting to reshape network protections.

Calibrating LLMs

In recent weeks, I’ve sat down with a cross-section of innovators—each moving methodically to calibrate LLMs and GenAI to function as a force multiplier for defense.

Brian Dye, CEO of Corelight, a specialist in open-source-based network evidence solutions, told me how the field is being split: smaller security teams scrambling to adopt vendor-curated AI while large enterprises spin up their own tailored LLMs.

John DiLullo, CEO of Deepwatch, a managed detection and response firm focused on high-fidelity security operations, has made an unexpected discovery: LLMs, carefully cordoned and human-vetted, are already outperforming junior analysts at producing incident reports—more consistent, more accurate, less error-prone.

Jamison Utter, security evangelist at A10 Networks, a supplier of network performance and DDoS defense technologies, offers another lens: adversaries are racing ahead, using AI to craft malware and orchestrate attacks at speeds no human scripter could match. The defenders, he notes, must become equally adaptive—learning not just to wield AI, but to think in its native tempo.

There’s a pattern here.

Cybersecurity solution providers are starting to discover, each in their own corner of the battlefield, that mastery now requires a new kind of intuition:

• When to trust the machine’s first draft.

• When to double-check its cheerful approximations.

• When to discard fluency in favor of friction.

Getting to know my machine

It’s not unlike what I’ve found using ChatGPT-4o as a force multiplier for my own beat reporting.

At first, the tool felt like an accelerant—a way to draft faster, correlate more, test ideas with lightning speed. But over time, I’ve learned that speed alone isn’t the point. What matters is knowing when to lean on the machine—and when to lean away.

The cybersecurity innovators I’ve spoken with thus far are internalizing a similar lesson.

Dye’s team sees AI as a triage engine—brilliant at wading through common attack paths, but unreliable on the crooked trails where nuance matters. “Help me do more with less is one of the cybersecurity industry’s most durable problems,” Dye observes. “So, ‘Help me understand what this alert means in English’ can actually be incredibly valuable, and that’s actually something that AI models do super well.”
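To make Dye’s point concrete, here is a minimal sketch of that “explain this alert in English” pattern, assuming the OpenAI Python client; the model name, alert fields, and prompt wording are all illustrative, not anything Corelight has described.

```python
# Minimal sketch: asking an LLM to explain a raw alert in plain English.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the alert below is made up.
from openai import OpenAI

client = OpenAI()

alert = {
    "rule": "ET SCAN Nmap Scripting Engine User-Agent Detected",
    "src_ip": "203.0.113.45",
    "dst_ip": "10.0.8.12",
    "dst_port": 443,
    "count": 137,
}

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Explain alerts in plain "
                    "English for a non-specialist, in three sentences."},
        {"role": "user", "content": f"Explain this alert: {alert}"},
    ],
)

print(response.choices[0].message.content)
```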

DiLullo’s analysts now trust AI to assemble the bones of a report—but know to inspect each joint before sending it out the door. In cybersecurity, DiLullo noted, making educated inferences is essential—and LLMs excel at scaling that process, efficiently surfacing insights in plain English where humans might otherwise struggle.

Utter’s colleagues have begun leveraging AI-derived telemetry—but only after investing serious thought into how the tools should be constrained.

Intentional orchestration

In each case, calibration is the hidden skill. Not just deploying AI, but orchestrating its role with intention. Not ceding judgment, but sharpening it.

Tomorrow, as I walk the floor at RSAC 2025 and continue these Fireside Chat conversations, I expect to hear more versions of this same evolving art form.

The vendors who will thrive are not those who see AI as a panacea—or a menace. They’re the ones treating it as what it actually is: a powerful, fallible partner. A new compass—helpful, but requiring a steady hand to navigate the magnetic distortions.

This is not the end of human-centered security; it’s the beginning of a new kind of craftsmanship.

And if the early glimpses are any guide, the quiet genius of this next chapter won’t be found in flashy demos or viral headlines.

Prompt engineering is the key

As A10’s Utter pointed out, it’s a craft that will increasingly depend on prompt engineers—practitioners skilled at shaping AI outputs without surrendering judgment. Those who master the art of asking better questions, not just accepting faster answers, will set the new standard.

That quiet genius will surface, instead, in the way a well-trained SOC analyst coaxes a hidden thread out of a noisy alert queue.

Or the way a vendor team embeds invisible friction checks into their AI pipeline—not to slow things down, but to make sure the right things get through.
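What might such a friction check look like? One hypothetical version, sketched in Python: before an AI-drafted report ships, verify that every indicator it cites actually appears in the source telemetry, and hold anything unverified for a human. The function names and data here are mine, not any vendor’s pipeline.

```python
# Hypothetical "friction check": gate an AI-drafted incident report by
# confirming every IP it cites exists in the raw telemetry it came from.
import re

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_ips(text: str) -> set[str]:
    """Pull IPv4-looking strings out of free text."""
    return set(IP_PATTERN.findall(text))

def gate_report(draft: str, telemetry: str) -> tuple[bool, set[str]]:
    """Pass the draft only if all cited IPs are grounded in telemetry."""
    unsupported = extract_ips(draft) - extract_ips(telemetry)
    return (not unsupported, unsupported)

telemetry = "2025-04-27T09:14Z conn 203.0.113.45 -> 10.0.8.12:443 x137"
draft = ("Scanner at 203.0.113.45 probed 10.0.8.12. Related activity "
         "was also seen from 198.51.100.7.")  # second IP is hallucinated

ok, flagged = gate_report(draft, telemetry)
if not ok:
    print(f"Hold for human review; unverified indicators: {flagged}")
```

The check adds a beat of friction only when a draft strays beyond its evidence; clean drafts pass through untouched.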

The machine can accelerate the flow, but the human will still shape the course.

Observes Utter: “Prompt engineering, I think, is the key to understanding how to get the most out of AI.”
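As a rough illustration of what Utter means, compare a naive prompt with a constrained one. This framing is my own hypothetical, not A10’s practice; the structured version forces the model to separate evidence from inference and to state its confidence.

```python
# Illustrative only: a naive prompt invites a confident guess; a
# constrained prompt demands evidence, alternatives, and hedging.
naive_prompt = "Is this alert a real attack?"

def build_prompt(alert: dict) -> str:
    """Assemble a constrained prompt for triaging a single alert."""
    return (
        "You are assisting a SOC analyst. For the alert below:\n"
        "1. List only facts present in the alert itself.\n"
        "2. Give the most likely explanation and one alternative.\n"
        "3. Rate your confidence low/medium/high and name the extra "
        "telemetry that would raise it.\n"
        f"Alert: {alert}"
    )

print(build_prompt({"rule": "suspicious login", "src_ip": "203.0.113.45"}))
```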

Where this leads, I’ll keep watch — and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

April 27th, 2025