Deepfakes are Literally Security Theater

Have you been to a theater lately? Probably not, because of the pandemic, but remember when we all used to go (movie theaters included)? We would watch performance art and like it, assuming it was well done and believable.

However, I sure see a lot of people getting very upset about something they call Deepfakes.

Why is there such a disconnect between all the people paying money and spending time to be entertained by the performing arts (the act of information deception) and the people decrying that our future will be ruined by Deepfakes (the act of information deception)?

I call this the chasm of information security, which I've been sounding the alarm on here and in my presentations around the world since at least 2012. It is the foundation of my new book, which I started writing at that time and which has since expanded from a warning call into tangible solutions.

We are long past the time when security professionals should have been talking about the dangers of, and controls for, integrity risks. It is evidence of that failure that people can, on one hand, be entertained by information deception without any worry and, on the other hand, decry it as a dangerous future if we allow it to continue.

Is the court jester the end of the kingdom? Obviously not. Is the satirist or political comedian the end of the future? Obviously not.

When an actor changes their voice, is it more or less concerning than when they change their appearance to look like the person they are attempting to represent accurately?

Watching a Deepfake, for me, is like going to the theater or watching a movie, and I fear it very little, perhaps because I intensely study all the ways we can protect ourselves against willful harm.

Integrity is a problem, a HUGE problem. Yet let me ask instead: why are people so worried that performance art, let alone all art, is being artistic?

A headline like this one is no more concerning to me than usual:

A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it.

“It was super easy actually,” he says, “which was the scary part.”

Yes, it’s called Wikipedia. Lots of college kids are generating fake content and fooling millions. Using technology to generate the content makes it faster and easier, sure, but it’s not far from the original problem.
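To show just how low that bar is, here is a minimal sketch of machine-generated blog filler using an off-the-shelf open model. This is my own illustration, not the student's actual setup (he used GPT-3, which requires API access); the smaller open GPT-2 model via the Hugging Face transformers library stands in for it, and the prompt is a made-up example.

# Minimal sketch: generate plausible-sounding blog text with an open model.
# GPT-2 stands in here for GPT-3; the prompt is a made-up example.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # keep the demo repeatable

prompt = "Feeling stuck? Here are three habits that changed my life."
results = generator(prompt, max_length=200, num_return_sequences=1)

# The model continues the prompt with fluent, confident-sounding filler,
# ready to paste into a blog template.
print(results[0]["generated_text"])

A handful of lines and a few seconds of compute is all it takes.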

The bigger problem is that people don’t often enough describe GPT-3 as a fire-spewing dumpster fire that was created without any sense of fire suppression. It’s a disaster.

When I was in Japan trying to solve for information system risks, I couldn't speak of insider attacks like this because everyone there simply told me no such thing existed.

Their culture was said to have deeply ingrained trust and honor systems, and they confidently believed they could detect any deviation (hard to argue with, given how they marched into the room and seated themselves by rank and respect from the middle to the end of the table).

So instead I watched a history documentary about how Osaka castles had been destroyed by invaders, and at the next meeting I brought up the dangers of imposters and deceptive fakes inside their organization. This hit a nerve.

Suddenly everyone was waving money at me, saying to take it and help them protect against such imminent dangers.

It is a massive failing of the security industry that people worry about data integrity and feel afraid, as if they have no tangible answers, yet surround themselves with art all day every day and "like" it. We have the answers right in front of us.

Again, that's the chasm of information security today. In my upcoming book I hope to explain in great detail what needs to be done about this fear of theater.


*** This is a Security Bloggers Network syndicated blog from flyingpenguin authored by Davi Ottenheimer. Read the original post at: https://www.flyingpenguin.com/?p=30747