Big Tech Admits Security Teams Politically Directed and Intentionally Blind to Hate Groups

My head hurt when I read a new “insider” article on detecting and preventing hate on big data platforms. It’s awful on many, many levels.

It’s like seeing a story on airplane safety where former staff reveal they couldn’t agree politically on how to measure gravity in a way that appeased a government telling them that up is down. Or that a crash in 2018 made them aware of risks — as if nothing ever crashed before a year or two ago.

Really? You just figured out domestic terrorism is a huge problem? That says a lot, a LOT. The American Civil War was fought after decades of terrorism, the terrorism continued after the war ended, and there's a long, rich history of multi-faceted organizations conspiring and collaborating to undermine democracy. And that's just in America.

I’m not going to give away any insider secrets when I say this piece provides some shockingly awful admissions of guilt from tech companies that facilitated mass harms from hate groups and allowed the problem to get far worse.

Here’s a quick sample:

…companies defined hate in limited ways. Facebook, Twitter and YouTube have all introduced hate speech policies that generally prohibit direct attacks on the basis of specific categories like race or sexual orientation. But what to do with a new conspiracy theory like QAnon that hinges on some imagined belief in a cabal of Satan-worshipping Democratic pedophiles? Or a group of self-proclaimed “Western chauvinists” like the Proud Boys cloaking themselves in the illusion that white pride doesn’t necessarily require racial animus? Or the #StoptheSteal groups, which were based on a lie, propagated by the former president of the United States, that the election had been stolen? These movements were shot through with hate and violence, but initially, they didn’t fit neatly into any of the companies’ definitions. And those companies, operating in a fraught political environment, were in turn slow to admit, at least publicly, that their definitions needed to change.

Limited ways of defining hate to benefit whom? What's the downside to being less limited?

In other words, "movements were shot through with hate and violence" and the companies say they were stuck worrying "what to do". They saw hate and violence. Then they figured there was a way to not do anything about it.

It should be obvious, even without a history degree, why that's a dangerous disconnect.

The article even says shutting down hate groups risked tech workers facing threats of attack, as if that justified giving in to the bully tactics instead of confirming they were on the right path by shutting those groups down.

Indeed, what good is it to say hate speech policies prohibit direct attacks if movements full of hate and violence haven't "directly attacked" someone yet? You're not really prohibiting, are you? It's like saying you prohibit plane crashes, yet insisting you can't stop a plane from crashing because it hasn't crashed yet.

Seriously. That's not prohibiting attacks; it barely rises to detecting them.
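To make that detection gap concrete, here's a minimal sketch in Python. This is entirely my own illustration; the patterns, example texts, and function names are hypothetical, not any company's actual policy engine. It shows how a policy scoped only to explicit "direct attacks" simply never fires on coded, conspiratorial content:

```python
# A minimal sketch (my illustration, not any company's actual system) of why
# a policy limited to explicit "direct attacks" under-detects. All patterns
# and example texts here are hypothetical.

import re

# A narrow policy: flag only explicit, direct attacks on protected categories.
DIRECT_ATTACK_PATTERNS = [
    r"\bi hate (all )?\w+ people\b",
    r"\b\w+ people (should|deserve to) (die|suffer)\b",
]

def violates_narrow_policy(text: str) -> bool:
    """Return True only if the text matches an explicit direct-attack pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in DIRECT_ATTACK_PATTERNS)

# An explicit attack: caught.
print(violates_narrow_policy("I hate all X people"))  # True

# Coded, conspiratorial recruitment: not caught, despite the harm it organizes.
print(violates_narrow_policy(
    "The cabal is stealing everything from us. Patriots, it's time to act."
))  # False
```

The point isn't the toy patterns. The point is that a definition scoped to explicit attacks returns "no violation" on exactly the kind of organizing language that precedes the attack.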

It's kind of like asking what to do when you hear a pilot in the air saying "gravity is a lie, a Democratic conspiracy…" versus a pilot saying "I hate the people in America, so this plane is going to crash into a building and kill people".

Is it really a big puzzle whether to intervene in both scenarios?

I guess some people think you have to wait for the crash. They shouldn’t be in charge of other people’s safety. Nobody should sit comfortably if they say “hey, we could and should have stopped all that harm, oops!”

How does the saying go… "never again, unless a definition is hard"? Sounds about right for these tech companies. What they really seem to be revealing is an attitude of "please don't hold me responsible for wanting to be liked by everyone, or for wanting an easier job", all while leaving the harms to grow.

You can’t make this stuff up.

And we know what happens when tech staff get so cozy and lazy that they refuse to stop harms, obsessing over staying liked and avoiding the hard work of finding flaws early and fixing them.

The problem grows dramatically, becoming significantly harder to fix. It's the most basic history lesson in security.

FBI director says domestic terrorism ‘metastasizing’ throughout U.S. as cases soar

Perhaps most telling of all is that people tried to use fallacies as their reason for inaction. If they did something, they reasoned falsely, it could turn into anything. Therefore they chose to do nothing.

Inside YouTube, one former employee who has worked on policy issues for a number of tech giants said people were beginning to discuss doing just that. But questions about the slippery slope slowed them down. “You start doing it for this, then everybody’s going to ask you to do it for everything else. Where do you draw the line there? What is OK and what’s not?” the former employee said, recalling those discussions.

Slippery slope is a fallacy. You're supposed to say "hey, that's a fallacy, and illogical" rather than sit on your hands because it was used in a debate. It's like someone saying "here's a strawman" and YouTube staff then disclosing that their discussions centered on how they must defeat that strawman.

That is not how fallacies are supposed to be handled. After all, if the slippery slope were a real thing, we should turn off YouTube entirely, because if you watch one video of fluffy kittens, the next thing you know you're eyeballs deep in KKK training videos. See what I mean? The fallacy isn't even worth the time.

And this pretty much sums up the Facebook nonsense about being ad-targeting geniuses while remaining completely blind to the fact that their platform was pushing violence:

“Why are they so good at targeting you with content that’s consistent with your prior engagement, but somehow when it comes to harm, they become bumbling idiots?” asked Farid, who remains dubious of Big Tech’s efforts to control violent extremists. “You can’t have it both ways.” Facebook, for one, recently said it would stop recommending political and civic groups to its users, after reportedly finding that the vast majority of them included hate, misinformation or calls to violence leading up to the 2020 election.
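Farid's point is mechanical, not rhetorical. Here's a toy sketch (my own construction under simple assumptions, using a crude bag-of-words similarity as a stand-in for real embeddings; none of this is Facebook's actual ranking code) showing that the exact same scoring function serves both targeting and harm detection, depending only on the seed set you hand it:

```python
# A toy sketch (my construction, not Facebook's actual code) of Farid's
# "can't have it both ways" point: the same similarity scoring that powers
# engagement targeting works unchanged when the seed set is known harmful
# content instead of a user's engagement history.

from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector; stands in for any real embedding."""
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank(seed_texts, candidates):
    """Rank candidates by similarity to a seed set. The same function serves
    recommendation (seed = engagement history) and harm detection
    (seed = known violating content)."""
    seeds = [vectorize(t) for t in seed_texts]
    scored = [(max(similarity(vectorize(c), s) for s in seeds), c) for c in candidates]
    return sorted(scored, reverse=True)

engagement_history = ["local gardening tips", "heirloom tomato seeds"]
known_harmful = ["the election was stolen patriots must fight"]
candidates = [
    "best tomato varieties for your garden",
    "patriots the election was stolen time to fight back",
]

print(rank(engagement_history, candidates)[0][1])  # targeting finds the gardening post
print(rank(known_harmful, candidates)[0][1])       # detection finds the violent post
```

Swap the seed set and the same machinery that finds "content consistent with your prior engagement" finds content consistent with known violating material. The capability was never the hard part.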

The vast majority of Facebook "civic groups" included hate, misinformation or calls to violence. That's no accident. I'll go out on a limb here and give another explanation, borrowed from psychologists who research how people respond to uncomfortable truths:

In seeking resolution, our primary goal is to preserve our sense of self-value. …dissonance-primed subjects looked surprised, even incredulous [and] discounted what they could see right in front of them, in order to remain in conformity with the group…
