
Leaking company secrets via generative AIs like ChatGPT

For a third party, knowing what people from company X are asking ChatGPT (or any other generative AI) could be quite interesting and profitable (as well as damaging to company X). Some scenarios that come to mind:

  • Product team member chats with an outside AI about ideas for new products or services.
  • Sales team member chats with an outside AI to find potential new customers.
  • M&A team member uses an outside AI to help with due diligence on a potential acquisition.
  • Finance team member uses an outside AI to assist with modeling tasks.
  • Marketing team member uses an outside AI to draft future press releases.
  • HR team member uses an outside AI to write job descriptions for future hires – or to ask about the laws surrounding mass layoffs.
  • Software developer uses AI tools to aid in writing or debugging code for an unannounced feature or product.

These are pretty simplistic use cases – as AI models get more sophisticated and useful, the applications (and potential information leakages) will become more serious. And figuring out who works for company X is easy peasy using LinkedIn and data from previous breaches.

Use of these tools will take conversations and tasks that have generally stayed within organizational walls and systems and place them outside, in nicely aggregated and easily identifiable locations that can be targeted by malicious actors – “watering holes” that can be mined directly or via middlemen. In the case of chat-based AI tools, the text of the chats could provide valuable insight into the thought processes of key individuals within the firm. And the information gained could be leveraged for further attacks, social engineering, direct competition, poaching (or infiltrating) talent, market manipulation, or other applications.

There are a number of potential paths a malicious actor might take to exploit these “opportunities”:

  • Set up a generative AI service and offer it for free or at a low cost – this would be pretty expensive to do, so it might be the go-to option for nation-state actors.
  • Get someone on the inside of a legitimate provider to supply the information. Intelligence services have decades of experience with this kind of operation, but less well-funded or sophisticated actors could also infiltrate firms or pay/coerce existing employees to hand over the desired information.
  • Compromise the systems of a legitimate provider and exfiltrate logs.
  • Deploy malware on target organizations’ systems and read conversations between employees and AIs.

I know this may sound a bit like the plot of a dystopian corporate cyberpunk movie – but these tools (which also have the potential for positive applications) lower the bar for both state-sponsored and non-state threat actors to adopt techniques that used to be limited to the most sophisticated intelligence services. Companies considering adoption of these tools need to weigh the risks as well as the benefits and make plans to protect themselves. And even if your organization has not planned to adopt these tools, your employees may have already made the decision for you – now is the time to look at the web logs, talk to employees about the potential risks, and plan for risk mitigation.
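If you want a quick sense of whether employees are already chatting with outside AIs, the web logs are a good first stop. Here is a minimal sketch in Python that counts proxy log requests to a couple of well-known generative AI endpoints – the log path and domain list are illustrative placeholders, so swap in your own proxy's log location and whatever AI services matter to you.

```python
#!/usr/bin/env python3
"""Count proxy log requests to known generative AI endpoints.

A minimal sketch for the "look at the web logs" step. Assumes a
plain-text proxy log with one request per line and the destination
host somewhere in the line. The log path and domain list below are
illustrative, not exhaustive.
"""

import re
from collections import Counter

LOG_PATH = "/var/log/squid/access.log"  # hypothetical; use your proxy's log
AI_DOMAINS = [                          # starter list; extend as needed
    "chat.openai.com",
    "api.openai.com",
]

# Match any of the listed domains anywhere in a log line.
pattern = re.compile("|".join(re.escape(d) for d in AI_DOMAINS))
hits = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            hits[match.group(0)] += 1

for domain, count in hits.most_common():
    print(f"{count:6d} request(s) to {domain}")
```

Even a crude tally like this can tell you whether you are planning a policy for a hypothetical risk or reacting to one that is already in daily use.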
