Hackers Exploit Generative AI to Spread RedLine Stealer MaaS

As generative AI platforms like OpenAI’s ChatGPT and Google Bard continue to dominate the headlines—and pundits debate whether the technology has taken off too quickly without necessary guardrails—cybercriminals are showing equal interest and no hesitance in exploiting them.

Not surprisingly, then, security researchers at Veriti uncovered “a new malware-as-a-service (MaaS) campaign that leverages the popularity of these AI platforms to distribute a strain of malware known as RedLine Stealer,” they wrote in a recent blog post.

“As generative AI platforms continue to scale their user bases, many businesses are still wrapping their heads around how to integrate these new technologies into their current models,” said George Jones, CISO at Critical Start. “In parallel, threat actors are moving faster and are already finding ways to leverage the advanced technology for malicious purposes.”

The RedLine Stealer campaign “has demonstrated that the latest AI advancements are not immune to malicious exploitation,” Veriti researchers said.

Why wouldn’t they? The potential for disruption and profit is impressive. Generative AI can be packaged as a file, such as a mobile application or an open source download, and that’s troubling, Veriti researchers said, because it “creates the perfect excuse for malicious actors to trick naïve downloaders.”

That means “the potential impact of such attacks is significant, as hackers could steal confidential data, compromise financial accounts, or even disrupt critical infrastructure,” the researchers wrote. “Moreover, these attacks are becoming more sophisticated, making detecting and preventing them harder.”

In the case of MaaS RedLine Stealer, cybercriminals steal data from compromised devices. “This type of service allows even individuals with limited technical knowledge to launch sophisticated cyberattacks,” researchers said.

Because “the MaaS ecosystem operates through online forums that act as marketplaces for malicious actors to advertise their malware and stolen data,” researchers said, the “forums offer a range of services, including access to malware, stolen data and even hacking tools” and the forum administrators can serve as “intermediaries between buyers and sellers, earning a percentage of the profits from the sale of stolen data or malware.”

What Veriti observed is that malicious actors have increasingly turned to the Telegram messaging app to purchase and deploy the malware, because it provides greater anonymity and encryption capabilities.

The RedLine Stealer malware “is designed to steal sensitive information from web browsers, including credit card details, saved credentials and autocomplete data,” they wrote.

“In addition, it can take an inventory of the target machine, gathering information on the user, location, hardware and installed security software,” the researchers explained. “The malware can upload and download files, execute commands and send back information about the infected computer at regular intervals.”

In this particular campaign, the hackers steal the credentials of Facebook business or community accounts—those that have followers in the thousands or more. “Using these pages, the malicious actors spread sponsored posts promoting free downloads of ‘alleged’ ChatGPT or Google Bard-related files,” researchers noted. “These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the file.”

But after a user downloads and extracts the file, “RedLine Stealer malware is activated and can steal passwords and download further malware onto the user’s device,” Veriti said. “This method of attack has proven to be particularly effective in spreading malware and gaining access to sensitive information, as dozens of Facebook business accounts have already been hijacked for these purposes.”

Once they’ve seized control of legitimate business pages, the “attackers can gain the trust of the page’s followers and use that trust to distribute malware disguised as legitimate software,” the researchers said.

“While this seems like a case of ‘something new,’ it’s really not. It’s a classic case of a threat actor jumping on a hot trend (this time, generative AI tools) and using already-established techniques and tools to trick people into downloading malware,” said Heath Renfrow, cofounder at Fenix24.

“These threat actors are using malware-as-a-service (MaaS)—something they did not create—and using malicious ads seemingly from established organizations to provoke a sense of trust,” said Renfrow. “This is a technique that we have been seeing a lot recently. It does combine a lot of hot security topics: MaaS, malicious ads and AI (which has security issues of its own).”

Indeed, the miscreants “are relying on people getting hooked and downloading the malware when they expect an AI chatbot application,” said Mike Parkin, senior technical engineer at Vulcan Cyber. “This is not about ChatGPT or Google Bard. This is about threat actors using a ‘top-of-mind’ subject line to draw in their targets,” said Parkin. “The reality is, if they didn’t have these AI chatbots, they would find something else to use.”

Still, “with great power comes great responsibility,” Veriti researchers said, noting that the RedLine Stealer campaign “highlights the need for increased cybersecurity measures and awareness to protect against this emerging threat.”

Organizations need to “recognize that the availability of MaaS and other types of tactics, threats and procedures, is growing quickly and becoming more difficult to detect. If you can make a purchase on Amazon, you can buy and deploy malware,” said Jones.

“Companies must pivot their focus to protecting their organizations from falling victim to such attacks. Relying on cybersecurity webinars and training for users is a great first step, but it isn’t enough,” he said. “Security monitoring must get tougher while organizations take time to get smarter. They must prioritize an increase in security processes, follow a zero-trust model and look to solutions that monitor their infrastructure 24/7 to flag suspicious activity and notify staff immediately.”

Teri Robinson

From the time she was 10 years old and her father gave her an electric typewriter for Christmas, Teri Robinson knew she wanted to be a writer. What she didn’t know is how the path from graduate school at LSU, where she earned a master’s degree in journalism, would lead her on a decades-long journey from her native Louisiana to Washington, D.C. and eventually to New York City, where she established a thriving practice as a writer, editor, content specialist and consultant, covering cybersecurity, business and technology, finance, regulatory, policy and customer service, among other topics; contributed to a book on the first year of motherhood; penned award-winning screenplays; and filmed a series of short movies. Most recently, as the executive editor of SC Media, Teri helped transform a 30-year-old, well-respected brand into a digital powerhouse that delivers thought leadership, high-impact journalism and the most relevant, actionable information to an audience of cybersecurity professionals, policymakers and practitioners.