
Safety First: 3 Steps to Enable ChatGPT and other Generative AI Tools


According to Deloitte, 42% of companies are currently experimenting with Generative AI, and 15% are actively incorporating it into their business strategy. It’s no secret that artificial intelligence (AI) is transforming the way we work. In just a few years, chatbots, Generative AI and other tools have begun helping users streamline workflows, optimize business processes, create personalized content at scale and write more efficient code.

However, the rapid evolution and adoption of AI tools is also changing the cybersecurity landscape, requiring organizations to rethink how they keep users, data and systems safe from malicious attacks. In fact, the Deloitte survey found that Generative AI risk and internal controls were respondents’ top concerns when adopting the new tools, and the Biden administration recently issued guidelines on how to safely enable Generative AI tools.

Generative AI risks are changing the cybersecurity landscape

As users rush to take advantage of these new tools, few are pausing to consider the risk Generative AI poses to the organization. It’s important to remember that AI learns from the data users input into it, including raw data such as source code, customer information, engineering specs, branding, messaging, positioning and other proprietary information. AI tools use this information to inform output for other users, including malicious actors.

For example, it was reported that a Samsung engineer pasted internal source code into ChatGPT in an effort to identify errors. While the engineer may have made the code more efficient, that information can now be used to further train the models and be served to other users, potentially exposing sensitive engineering data to competitors. And, because the Internet is forever, it’s highly unlikely that Samsung will ever be able to expunge the data from these models, even if ChatGPT’s owner were willing to help.

Even seemingly innocent information, such as a company logo, messaging, positioning and business strategies, can help malicious actors build more convincing phishing emails, fake sign-in forms or adware. With access to the right source material, ChatGPT can create convincing fakes that trick users into clicking a link in an email or entering their credentials into a fraudulent form.

Generative AI Security Best Practices

Organizations have been responding to Generative AI risks the way they respond to most new things: by blocking them. In addition to Samsung, a recent survey from BlackBerry shows that 75 percent of organizations are currently implementing or considering bans on ChatGPT and other Generative AI applications in the workplace. Entire countries are even instituting bans, citing public safety. This may well improve these organizations’ Generative AI security posture, but banning the tools outright hampers innovation, productivity and competitiveness.

Fortunately, there is a middle ground. Here are three steps that you can take to enable the use of ChatGPT and other Generative AI tools without putting the organization at increased risk:

1. Educate users

As with any new technology, most users don’t understand how Generative AI tools work or why they should be careful about the content they input into them. Educating them about how their inputs are used to inform future responses will make them think twice before pasting in proprietary information. Engineers already know not to share source code on a public forum, and it’s not a huge leap for them to apply the same logic to Generative AI once they understand the danger.

2. Implement data loss prevention (DLP) policies

Once users have made the connection between Generative AI tools and potential data loss, extending DLP policies to these tools is a logical next step. Because data use policies are already codified, the infrastructure is in place and provides a foundation for keeping proprietary data out of Generative AI tools. A minimal sketch of what such a check might look like appears below.
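For illustration only, here is a minimal sketch of a DLP-style scan over text bound for a Generative AI prompt. The rule names, patterns and project codenames are hypothetical placeholders, not Menlo Security’s implementation or any particular DLP product’s policy language:

```typescript
// A hypothetical DLP-style scan over text headed for a Generative AI prompt.
// Rules and codenames are illustrative placeholders, not a real policy.

interface DlpFinding {
  rule: string;
  match: string;
}

const DLP_RULES: { name: string; pattern: RegExp }[] = [
  // Private key material pasted wholesale
  { name: "private-key-block", pattern: /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/ },
  // AWS-style access key IDs
  { name: "aws-access-key", pattern: /\bAKIA[0-9A-Z]{16}\b/ },
  // Placeholder internal project codenames
  { name: "internal-codename", pattern: /\bPROJECT-(?:ATLAS|ORION)\b/i },
];

// Return every rule the outbound text trips.
function scanOutboundText(text: string): DlpFinding[] {
  const findings: DlpFinding[] = [];
  for (const rule of DLP_RULES) {
    const match = text.match(rule.pattern);
    if (match) {
      findings.push({ rule: rule.name, match: match[0] });
    }
  }
  return findings;
}

// Simulated paste containing a credential-shaped string.
const pasted = "const key = 'AKIAABCDEFGHIJKLMNOP'; // TODO remove";
const findings = scanOutboundText(pasted);
if (findings.length > 0) {
  console.warn("Blocked: matched DLP rules:", findings.map((f) => f.rule));
}
```

The point is that the same pattern-matching rules already written into data use policies can be reapplied at the boundary between the user and the AI tool, rather than inventing a new control from scratch.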

3. Gain visibility and control

There is also an opportunity to expand DLP policies beyond simple keyword checks. You need a layered approach that gives you visibility into how users interact with Generative AI tools and the control to stop them from doing something careless. This includes better detection capabilities and the ability to prevent users from pasting large blocks of text into web forms, as sketched below.
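As one hedged example of what that control could look like at the browser layer, the sketch below intercepts paste events on a guarded site and blocks oversized pastes. The host list and 1,000-character threshold are illustrative assumptions, not settings from any real product:

```typescript
// Hypothetical browser-extension content script that guards paste events on
// Generative AI sites. The domain list and character threshold are
// illustrative assumptions chosen for this sketch.

const GUARDED_HOSTS = ["chat.openai.com"]; // extend per policy
const MAX_PASTE_CHARS = 1000;

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    if (!GUARDED_HOSTS.includes(window.location.hostname)) return;

    const pasted = event.clipboardData?.getData("text") ?? "";
    if (pasted.length > MAX_PASTE_CHARS) {
      // Stop the paste before it reaches the prompt field.
      event.preventDefault();
      alert(
        `Paste blocked: ${pasted.length} characters exceeds the ` +
        `${MAX_PASTE_CHARS}-character limit for this site.`
      );
      // A real deployment would also log the event for security review.
    }
  },
  true // capture phase, so the guard runs before the page's own handlers
);
```

A size threshold alone won’t catch a short but sensitive snippet, which is why this control works best layered on top of the DLP scan above rather than in place of it.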

A way forward

Generative AI tools are making users more efficient, productive and innovative, but they also pose a significant risk to the organization. Simply blocking the tools puts the organization at a competitive disadvantage, so cybersecurity teams need a more nuanced strategy for protecting users, data and systems. Educating users, extending DLP policies to the new technology and gaining visibility into and control over users’ interactions with Generative AI tools can help cybersecurity teams protect the organization without limiting productivity.

Download the report: How Employee Usage of Generative AI Is Impacting Security Posture


This is a Security Bloggers Network syndicated blog from Menlo Security, authored by Negin Aminian. Read the original post at: https://www.menlosecurity.com/blog/3-steps-enable-chatgpt-other-generative-ai/