
Talking about ChatGPT with your colleagues

I wonder how many security teams have reached out to their colleagues about the use of ChatGPT and other hot new generative AI tools. Here’s what I sent to the folks at my company the other day to provide some guidance from a security and risk point of view…

Yes, I know ChatGPT is the new hotness… but as our firm’s resident pooper of ALL parties, I would like to throw out some security-related thoughts on our new robot assistants/overlords.

First and foremost:  All of the information that you enter into ChatGPT (and probably any other AI assistant) is saved and may be used to “improve service.” It is not inconceivable that your prompts could be used for further training of the model and thus could be exposed to other users someday.  So… do not “tell” these AI assistants anything about the firm that has not already been made public.

Data freshness:  Remember that the ChatGPT AI model stopped learning a couple of years ago and is not aware of things that happened after it was last trained in 2021.  Asking it questions about recent events (especially about dynamic topics like cryptocurrency, the war in Ukraine, etc.) will not get you accurate or timely results. 

Accuracy:  ChatGPT was trained on Internet data, which ranges from really good and accurate to (how shall I say this delicately) total crap.  While efforts have been made to sort the good from bad, the bot has been shown to sometimes provide inaccurate results.

Coding:  While ChatGPT can provide some assistance with writing and debugging code, it has definite limitations.  From a security point of view, there is at least one study which suggests that code generated by AI is more likely to contain security vulnerabilities than code generated by humans.  And uploading proprietary code to an AI assistant for debugging is something you definitely should NOT be doing (see “First and foremost” above).
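To make that risk concrete, here is a minimal, hypothetical sketch of the kind of flaw those studies describe: an SQL injection bug of the sort an AI assistant might plausibly suggest, shown next to the parameterized version a careful reviewer would insist on. The table and function names are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: user input interpolated directly into the query.
    # An input like "x' OR '1'='1" turns the WHERE clause into a tautology
    # and returns every row in the table.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe pattern: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo with an in-memory database and two sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

malicious = "x' OR '1'='1"
print(len(find_user_unsafe(conn, malicious)))  # 2 -- injection leaks all rows
print(len(find_user_safe(conn, malicious)))    # 0 -- input treated as a literal
```

The unsafe version looks perfectly reasonable at a glance, which is exactly why AI-suggested code needs the same (or more) review scrutiny as human-written code.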

Availability:  Finally, remember that you can’t depend on ChatGPT being available when you need it – it is still in beta, is frequently over capacity, and eventually the ChatGPT folks are going to want to make some money and will charge for this thing.

We are still at the beginning of the journey when it comes to generative AI assistants like ChatGPT and we are going to see lots of really cool positive use cases for these tools as well as potential issues and risks.  If you think you have found a cool new way to use ChatGPT or other AI assistants to revolutionize our business, let’s discuss it and figure out how to make it safe!

*** This is a Security Bloggers Network syndicated blog from Al Berg's Paranoid Prose authored by Al Berg. Read the original post at: https://paranoidprose.blog/2023/01/18/talking-about-chatgpt-with-your-colleagues/
