Scientists Use AI Chatbots to Carry Encrypted Messages Undetectable by Cybersecurity Systems
The world has a long history of hiding messages in plain sight. My own crude attempts as a kid included hours spent inserting code words and number sequences into notes and messages to avoid detection by parents, teachers and other kids. And occasionally whipping out my Batman decoder ring to figure out messages being hidden from me.
“From Caesar ciphers tattooed onto the messenger to hidden codes within news broadcasts,” says BeyondTrust field CTO James Maude, this technique has been around for as long as people have been able to write. Only the medium has changed over the years.
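For readers unfamiliar with the Caesar cipher Maude mentions: it simply shifts every letter a fixed number of places in the alphabet. A minimal sketch (the function name and shift value here are illustrative, not from the article):

```python
# Caesar cipher: shift each letter a fixed number of places in the
# alphabet, leaving non-letters (spaces, punctuation) untouched.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

# caesar("attack at dawn", 3)  -> "dwwdfn dw gdzq"
# caesar("dwwdfn dw gdzq", -3) -> "attack at dawn"
```

Decryption is just the same shift applied in reverse, which is exactly why such schemes are trivially breakable today.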
Perhaps that’s why, although cool, it’s not surprising that scientists have managed to get AI chatbots like ChatGPT to carry encrypted messages that are undetectable by cybersecurity systems, according to a report in Live Science. That has huge implications for securing communications, particularly as the report quotes its creators as saying, “in scenarios where conventional encryption mechanisms are easily detected or restricted.”
Pointing to centuries-old practices of hiding secret messages like “marking certain letters in a letter or using invisible ink,” J Stephen Kowski, field CTO at SlashNext Email Security, says what’s different here is “AI can make these hidden messages blend in even better, making them much harder to spot.”
What the scientists have done is create a system called EmbedderLLM that embeds ciphered text within AI-generated messages that appear to be written by humans, readable only by those holding the corresponding private key or password. The messages can be sent over virtually any texting or chat app or platform, the Live Science report noted.
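To make the idea concrete, here is a toy sketch of the general technique of hiding encrypted bits in model-generated text. This is an illustration of the concept only, not the researchers' actual EmbedderLLM system (whose details the article does not give): a stand-in deterministic "model" offers two plausible next words at each step, each ciphertext bit picks one of them, and a receiver who shares the model and key re-derives the candidates to recover the bits. The XOR stream stands in for a real cipher.

```python
# Toy illustration of LLM-style steganography (NOT the actual EmbedderLLM):
# each ciphertext bit selects between the two "most likely" next words of a
# shared deterministic text model, so the cover text reads like ordinary
# prose while a receiver with the same model and key can recover the bits.
import hashlib

# Hypothetical toy "language model": for any prefix, deterministically
# offers two candidate next words. A real system would use an LLM's
# top-k token probabilities instead.
VOCAB = ["the", "cat", "dog", "sat", "ran", "on", "by", "a", "mat", "road"]

def next_word_candidates(prefix_words):
    h = hashlib.sha256(" ".join(prefix_words).encode()).digest()
    i, j = h[0] % len(VOCAB), h[1] % len(VOCAB)
    if i == j:
        j = (j + 1) % len(VOCAB)
    return VOCAB[i], VOCAB[j]  # same "top-2" for sender and receiver

def xor_encrypt(message, key):
    # Stand-in for a real cipher: XOR with a key-derived stream.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(message))

def to_bits(data):
    return [(byte >> k) & 1 for byte in data for k in range(8)]

def embed(ciphertext, seed=("the",)):
    words = list(seed)
    for bit in to_bits(ciphertext):
        w0, w1 = next_word_candidates(words)
        words.append(w1 if bit else w0)  # the bit picks the candidate
    return " ".join(words)

def extract(cover_text, n_bytes, seed_len=1):
    words = cover_text.split()
    bits = []
    for pos in range(seed_len, seed_len + n_bytes * 8):
        w0, w1 = next_word_candidates(words[:pos])
        bits.append(1 if words[pos] == w1 else 0)
    data = bytearray()
    for chunk in zip(*[iter(bits)] * 8):
        data.append(sum(b << k for k, b in enumerate(chunk)))
    return bytes(data)

key = b"shared-secret"
secret = b"hi"
cover = embed(xor_encrypt(secret, key))
recovered = xor_encrypt(extract(cover, len(secret)), key)
# recovered == b"hi"
```

The point of the sketch is the property the article highlights: nothing in the cover text flags that encryption is in use at all, which is what lets the scheme slip past systems that look for ciphertext rather than ordinary-looking prose.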
“Most cryptographic techniques and algorithms are detectable and potentially breakable with quantum computing,” says Satyam Sinha, CEO and cofounder of Acuvity. “The novelty of this idea is that it can simply evade techniques existing today as it hides that encryption is used.”
But Sinha sees obvious limitations. “It is invisible and computationally orders of magnitude harder,” he says, but because it “depends on existing cryptographic mechanisms, it’s not necessarily stronger.”
And the advance raises concerns that the same technique could fall into the wrong hands. While recognizing its legitimate use cases, Sinha notes that, as with any technique, “there are bigger concerns of the techniques being used by the bad guys or for unethical causes, as it can evade deep packet inspection and can bypass censorship.”
Indeed, the same merits that make the technique valuable in protecting communications in hostile environments can, as Maude contends, be used “to communicate hidden messages that could be used to exfiltrate information or to connect to command and control (C2) infrastructure through approved corporate systems.” That’s akin to a familiar scenario for defenders, he says — malware using hidden messages delivered via blogs and social media to issue remote commands and harvest information.
The security pros who weighed in on this one caution security teams to proceed carefully and shore up defenses. “The risk of data exfiltration serves as a timely reminder of the importance of least privilege and just-in-time (JIT) access to ensure that data cannot be maliciously accessed and nefariously smuggled out using these or any other techniques,” says Maude.
And Kowski advocates for using “security that can look for odd patterns or sneaky tricks, not just obvious threats.”
For now, though, the risk from this new technique is low — the scientists who created it don’t expect it to be put to use in the real world any time soon.
But, hey, it’s never too early to brush up on security to prepare for the future…and break out that Batman decoder ring.