
#RSAC is bustling — and AI + security is huge: #StrongerTogether?

At RSA Conference 2023, you can’t move for artificial intelligence chatter. How will it help us meet the software supply chain security challenge? And how will it help bad actors find vulnerabilities?

San Francisco’s Moscone Center is heaving with people again. It’s like 2020’s super-spreader event never happened. In this week’s Secure Software Blogwatch, we believe the hype.

Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: RSA Cheese.

[ See ReversingLabs @ RSAC for speaking sessions and more | Plus: New Software Supply Chain Security Survey key takeaways ]
 

RSAC AI snacks

What’s the craic? Sam Sabin reports — “Generative AI blitz hits cyber industry’s biggest conference”:

“AI’s impact”
This year’s RSA Conference has become a hot spot for AI security. … Companies big and small are rolling out new products this week that incorporate generative AI.

But so far, most of the products have been pretty simple — with security firms opting to just train their own large language models on their stores of intelligence and attack data. … Ever since OpenAI’s ChatGPT entered the scene last fall, companies have been scrambling to figure out how they, too, can profit. … But until this week, cybersecurity firms have been a bit slower to embed generative AI into their systems compared to other industries.

Generative AI’s impact on cybersecurity is likely to be much bigger than what we’ll see at RSA throughout the week. [But] cybersecurity vendors aren’t exempt from marketing hype cycles when new technology emerges.

What are the broad strokes? Edward Gately adds — “Fighting Bad AI with Good AI”:

“We need AI to fight this”
The theme of RSAC 2023 is stronger together. The massive conference in San Francisco is back up to pre-pandemic attendance levels. Rohit Ghai, RSA’s CEO, discussed new challenges that AI puts on the cybersecurity community: “AI will challenge our identity, our role in this world. … Bad AI will take us for a ride and identity is a sitting duck.”

Cybercriminals will use [AI] to compromise identity and “we need AI to fight this.” We’ll need AI to weed out false-positives and reduce security alert fatigue, he said. … AI in security doesn’t mean a replacement for people, he said. Most of the AI solutions companies are rolling out are “co-pilot” solutions. Many jobs will disappear because of AI, but in cybersecurity “we don’t have enough people as is.”

Zeroing in on software, John Furrier and Dave Vellante chat in — “theCUBE”:

“Completely different ballgame”
So this is of course the premier cybersecurity event on the planet. … And this was the last event in 2020 prior to COVID. … (laughs) It was a super spreader event. … But it’s big. … It feels like it’s back to where it was in 2019.

I think that most of the security challenges that we face in this industry are self-inflicted. … Security as a cost center is not the question anymore — it’s security as a company saver. [And] software supply chain’s a big story. … It’s a shift: … You have this two-tiered supply chain attack challenge. … the 3CX attack, which was a double supply chain attack … meaning you had a download of a piece of software … through the supply chain that triggered a second supply chain.

So modern AI applications are coming. That combined with cloud scale, is going to make an opportunity to re-shift the development and security posture. … And all of it’s underpinned by open source software, and the software supply chain or software bill of materials — all of that’s going to come into play up and down the stack — from network to Kubernetes clusters to application monitoring and security. It’s going to be a completely different ballgame in the next five years.

But what of the unintended consequences? Arielle Waldman is glad you asked — “RSAC panel warns AI poses unintended security consequences”:

“Panelists emphasized their concerns”
While a panel of experts at RSA Conference 2023 touted generative AI for a host of security uses including incident response, they also warned the rapid adoption of the technology will present unintended consequences. … Ram Shankar Siva Kumar, data scientist in Azure Security at Microsoft; … Vijay Bolina, CISO at Google DeepMind; Rumman Chowdhury, founder of Bias Buccaneers; and Daniel Rohrer, vice president of software security at Nvidia … addressed if and how security can keep pace with the whirlwind of large language model use.

Chowdhury … attributed the heightened use of generative AI to enterprises’ needs for critical thinking and fast analysis, which the technology does address. On the other hand, the panelists emphasized their concerns, such as the potential for joblessness in particular fields, inherent bias, and even “hallucinations,” [which] occur when an LLM provides responses that are inaccurate or not based in facts.

The Cryptographers’ Panel is always a good listen. Iain Thomson channels the eponymous Mr. S. — “RSA’s Adi Shamir thinks we’re safe for a generation”:

“No evidence whatsoever”
Adi Shamir … the “S” in “RSA” … opined that in the 1990s he saw three big issues: … AI, cryptography, and quantum computing. Two out of three had delivered [but] quantum computing has yet to show promise.

He wasn’t alone in his skepticism. British mathematician Cliff Cocks, who developed public-key cryptography years before session host Dr Whitfield Diffie and his colleagues came up with the same idea, was somewhat cutting about stories that the Chinese have developed quantum systems to crack current encryption systems [and] there’s “no evidence whatsoever” that it would work on a larger scale.

In fact, Shamir’s position on AI is a bit more nuanced. Mathew J. Schwartz explains — “Cryptographers’ Panel Talks Quantum Computing and AI”:

“Ascending the hype scale”
Shamir said until last year, he thought AI might have some use cases purely on the defensive side of cybersecurity, and very few offensive use cases. “I’ve completely changed my mind as a result of last year’s developments, including ChatGPT, etc.,” he said. “I now believe that the ability of ChatGPT … to interact with people is going to be misused on a massive scale” and to “have a major impact on social engineering.”

“What they seem to be pretty good at is human engineering,” said Whitfield Diffie. … If ChatGPT is ascending the hype scale, blockchain’s star seems to be falling. “Blockchain has been having a bad year,” Diffie said.

AI SchmAI. TeeCee is unimpressed:

The LLM/ML products may give a passable impression of intelligence, but they’re no more than idiots savant — at best.

Naturally, Bruce Schneier is flogging his “hacking democracy” schtick again:

“Increasingly not OK”
Imagine if we had an AI … that voted on our behalf 1,000 times a day, based on preferences it inferred we have. … It would be just an algorithm for converting individual preferences into policy decisions. … Any AI system should engage individuals in the process of democracy, not replace them.

[But AI] systems are super opaque, and that’s become increasingly not OK. … Even if we know the training data used and understand how the model works, there are all these emergent properties that make no sense. … I don’t know how to solve all these problems. But this feels like something that we as security people can help the community with.

Meanwhile, osxtra donates an earworm:

Oh, you better watch out, you better not spam, you better not phish, I’m telling you ma’am …

And Finally:

Slightly cheesy this year

 

Previously in And finally

You have been reading Secure Software Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi or [email protected]. Ask your doctor before reading. Your mileage may vary. Past performance is no guarantee of future results. Do not stare into laser with remaining eye. E&OE. 30.

Image sauce: Minda Haas Kuhlmann (cc:by; leveled and cropped)

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Richi Jennings. Read the original post at: https://www.reversinglabs.com/blog/rsac-is-bustling-ai-and-security-is-huge-strongertogether
