Leading Through Uncertainty: AI, Risk, and Real Talk from RSAC’s Women in Cyber

At RSAC 2025, early risers were rewarded with strong coffee, realistic opinions, and a panel of cybersecurity leaders who didn’t hold back. Synack hosted its Women in Cyber Breakfast for the fourth year, and this year’s conversation couldn’t have been more timely.

Moderated by best-selling author Nicole Perlroth (This Is How They Tell Me the World Ends)—who just launched her new podcast To Catch a Thief: China’s Rise to Cyber Supremacy—the panel explored what it means to lead in cybersecurity during a time of fast-moving AI adoption, global tensions, and pressure from every angle.

Joining Nicole were Melissa Bishop, Nidhi Luthra, Deneen DeFiore, and Rebekah Wilke.

On AI and Insider Threats: “It’s not just better phishing…”

Perlroth kicked things off by asking how CISOs are coping with the onslaught of next-gen threats. Melissa Bishop got right to it: AI is supercharging social engineering. “It’s not just better phishing,” she explained. “We’re even seeing candidates use AI to look more qualified on paper, and that opens the door to potential insider threats.”

And yes, that certainly plays into schemes like North Korean IT workers getting hired under false pretenses, but it has broader implications, too.

Bishop also emphasized the importance of data classification. To paraphrase her point: if AI agents are consuming your information, you’d better know what they’re allowed to touch. We need to think about AI access the way we think about human identity and privilege.

Being Asked to Do More With Less

Several panelists echoed a familiar challenge: more risk, tighter budgets. Nidhi Luthra noted that her team is moving away from “broad controls” and putting more energy into targeted investments that are informed by intel. “It’s more surgical now,” she said. “And we’re being asked to justify it all—with metrics and ROI.”

Deneen DeFiore framed the moment as an opportunity to invest in tech like AI agents, tools that can improve irregular operations, but emphasized it only works if cybersecurity is clearly aligned with the business.

However, no panelist suggested that the way to solve these challenges is to replace people with AI, agents, or automation. Instead, this technology can help bridge the skills gap for newer talent or help experienced teams move faster. At least, that was the sentiment at RSAC.

The Talent Crisis, Personal Liability, and Staying Sane

When Perlroth brought up the elephant in the boardroom, the feeling that many security leaders are now being asked to function as intelligence agencies, there were nods all around.

Luthra flagged that it’s not just the stress, which she feels she has learned to manage; personal liability is a growing concern for CISOs. She added that when budgets are cut, training and development are usually the first to go, which makes hiring and retaining talent even harder.

DeFiore shared that everything revolves around safety and risk in aviation, so her team is growing cybersecurity talent through apprenticeships and rotations. “They know what they’re signing up for,” she said. “And we’re building a pipeline instead of only chasing the most experienced.”

Rebekah Wilke pointed out that the need for broader skillsets, rather than narrow domain expertise, is changing how teams operate. “But it does affect how we deliver on outcome-focused strategies,” she cautioned.

AI Governance: AKA the Still-Figuring-It-Out Stage

Perlroth turned the conversation back to AI, asking how leaders are setting guardrails. Wilke didn’t sugarcoat it: “AI is moving too fast for governance to keep up. We still don’t know what the real outcomes will be.”

Luthra described internal governance groups that help “slow things down” while the company figures out how and where AI should be integrated, especially in patient-facing services. She also mentioned efforts to hunt for rogue AI usage inside the org to understand what it might be capturing.

Bishop returned to the idea of treating AI agents like privileged identities. “This is a shift,” she said. “You can’t just deploy these tools without thinking about what they’re allowed to access.”

Supply Chain and Shared Risk

Finally, the panel dug into third-party risk, especially in the wake of recent high-profile incidents. Luthra referenced the public response from the CEO of CrowdStrike, praising the transparency and realism: “Humans make mistakes. Resilience is the playbook.”

DeFiore added that third-party risk can never be completely controlled, but understanding shared dependencies, meaning which parts of your business rely on which third parties, is critical. “You’ve got to bake that into your continuity planning.”

The Bottom Line

There was no sugarcoating on this panel, just hard-won insight from people in the trenches. If there was a unifying theme, it was this: Today’s security leaders are under pressure to do more, prove more, and stay ahead of both emerging tech and nation-state threats… all while recruiting a workforce that hasn’t even been fully trained yet.

And somehow, they keep showing up.

*** This is a Security Bloggers Network syndicated blog from Adopting Zero Trust authored by Elliot Volkman. Read the original post at: https://www.adoptingzerotrust.com/p/leading-through-uncertainty-ai-risk