No Deep AI Security Secrets In This Post!
I am not an AI security expert (I hear there are very few of those around). I am essentially a motivated amateur learner in AI security … and I would even trust Bard advice on Artificial Intelligence security (well, that’s a joke — still, you can see what it says anyhow)
However I was a pretty good analyst, and some say that this is kinda a minor superpower 🙂
So, in this post, I will share some things that puzzle me in this emerging domain, and I will use the 3 podcast episodes we did on securing AI as evidence. Note that all of them predate the current LLM craze. BTW, if you have anything fun to say about LLM security (easy!) and you actually know what you are talking about (hard!), talk to us 🙂
These are the episodes:
- EP52 Securing AI with DeepMind CISO
- EP68 How We Attack AI? Learn More at Our RSA Panel!
- EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far
Based on these episodes, you can see things that interest me the most as we asked them of every guest:
- What is different about securing AI vs securing another data-intensive, complex, enterprise application?
- What portion of AI-related “badness” (harm, risk, etc) fits within the cybersecurity domain?
Since I promised to provide no answers in this blog, let me do more questions:
1a What aspects of securing AI are unchanged from securing, well, anything else?
1b1 What aspects of securing AI are different because the threats are different?
1b2 What aspects of securing AI are different because the technology is different?
1b3 What aspects of securing AI are different because the use cases emerge so quickly?
1b4 What aspects of securing AI are different because, well, because it is the freaking AI? 🙂
2a When somebody is trying to attack the AI you use, you call a CISO, so … do you call a CISO if …
2a1 AI is producing content that somebody does not like?
2a2 AI is being used for illegal purposes?
2a3 AI is being used against privacy rules and conventions?
2a4 AI does something that is perceived as societal harm by somebody?
2a5 AI does something else somebody does not like?
More questions? Yes, there will be more!
Miscellaneous public reading on the topic I found useful:
- The AI Attack Surface Map v1.0
- Market Guide for AI Trust, Risk and Security Management (BTW, Gartner takes a wisely broad view of the related badness [“adverse outcomes”], but this IMHO dilutes the focus somewhat…not sure what is best)
- AI Security: How to Make AI Trustworthy
- A CISOs Guide: Generative AI and ChatGPT Enterprise Risks
- The Road to Secure and Trusted AI
- Supercharging security with generative AI
- Defending AI Models: From Soon To Yesterday
P.S. If this feels like an incomplete thought blog, yeah … this is one of those!
No Deep AI Security Secrets In This Post! was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.
*** This is a Security Bloggers Network syndicated blog from Stories by Anton Chuvakin on Medium authored by Anton Chuvakin. Read the original post at: https://medium.com/anton-on-security/no-deep-ai-security-secrets-in-this-post-d9af9e38b7a0?source=rss-11065c9e943e------2