
Facebook Lied About Encryption and Mines Outrage for Profit

There are two important and connected ethics stories in the news lately about Facebook’s management of user security.

The first is what I’ve been telling people about WhatsApp for several years now: the design of the product had a backdoor built in and barely obscured.

On one recent call with a privacy expert and researcher, they literally dropped off when I brought this fact up. After doing some digging they jumped back on the call and said, “Shit, you’re right. Why aren’t people talking about this?” Often in security it’s unpleasant to be correct, and I have no idea why people choose to talk about other things instead.

It was never much of a secret. Anyone could easily see (as I did, as that researcher did) that the product always said that if someone reported something they didn’t like in a chat with another person, the whole chat could be sent to Facebook for review. In other words, a special reporting mechanism meant a key held by a third party could unlock an “end-to-end” encrypted chat.

That is a backdoor by definition.

A switch was designed so that a third party could secretly enter a private space to have a look around, including when the someone flipping the switch to gain entry is the third party itself (nothing I’ve seen so far proves Facebook couldn’t initiate it without consent, meaning they could drop in whenever they wanted).
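For readers who want to see why this counts as a backdoor, here is a minimal sketch in Python. It is my own illustration with hypothetical names, not WhatsApp’s actual code: the point is simply that the reporting endpoint already holds decrypted messages, so a “report” path that forwards them to the platform defeats the end-to-end guarantee, and if the platform can trigger that path itself, consent never enters the picture.

```python
# Minimal sketch (hypothetical names, not WhatsApp's actual code) of why a
# client-side "report" feature undermines end-to-end encryption guarantees:
# the reporting client already holds the plaintext, so the platform only has
# to ask the client to hand it over.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Chat:
    """Messages as seen by one endpoint, i.e. already decrypted locally."""
    peer: str
    plaintext_messages: List[str] = field(default_factory=list)


def report_conversation(chat: Chat, last_n: int = 5) -> dict:
    """Package recent *decrypted* messages for the platform's review queue.

    End-to-end encryption only protects data in transit between endpoints;
    once an endpoint forwards its local plaintext to a third party (here,
    a moderation service), that third party can read the chat.
    """
    return {
        "reported_peer": chat.peer,
        "messages": chat.plaintext_messages[-last_n:],  # plaintext leaves the endpoint
    }


# Example: the "encrypted" chat is readable by whoever receives this payload.
chat = Chat(peer="alice", plaintext_messages=["hi", "meet at 9?", "ok"])
print(report_conversation(chat))
```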

Apparently this has finally become mainstream knowledge, which is refreshing to say the least. It perhaps puts to bed any doubt that the Facebook PR machine has for years been spitting out intentional bald-faced lies.

WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.

Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.” It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.”

If you think that sounds awful, here’s a flashback to a chalkboard-screeching 2019 tweet that may as well have come from the ex-head of safety of a tobacco company on an investor/politics tour, claiming the mint-flavored cigarette filter was the most health-preserving thing of all time.

Source: Twitter

A very strange fact is that this is the same person Facebook and Stanford recently pushed forward into the NYT to attack Apple over engineering privacy protections meant to protect children from harm.

Hypocrisy? It doesn’t get much worse, as others have already pointed out about Facebook executives who seem to gin up bogus outrage for profit.

The second story is thus that Facebook is finally starting to face the book: it has been creating vitriol and outrage this whole time for self-gain and profit, carelessly using technology in a manner obviously counterproductive to the health and safety of society.

What the AI doesn’t understand is that I feel worse after reading those posts and would much prefer to not see them in the first place… I routinely allow myself to be enraged… wasting time doing something that makes me miserable.

This is seconded by research on Twitter showing that social media platforms effectively train people to interact with increasing hostility to generate attention (feeding a self-defeating social entry mechanism, like stealing money to get rich).
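To make that “training” dynamic concrete before the excerpt below, here is a toy reinforcement-learning simulation. It is purely my own illustration with made-up reward numbers, not the researchers’ model: it only shows that if outrage posts reliably earn more engagement than neutral ones, a standard value-learning update drifts a user toward posting more outrage.

```python
# Toy illustration (my own sketch, not the study's model) of the reinforcement
# dynamic described below: outrage earns more likes/retweets on average, so a
# simple value-learning loop gradually shifts the user toward posting outrage.

import math
import random


def simulate_user(steps=500, alpha=0.1, beta=1.0, seed=1):
    """Toy value-learning loop: the user learns which posting style 'pays'."""
    random.seed(seed)
    q = {"outrage": 0.0, "neutral": 0.0}  # learned value of each posting style
    for _ in range(steps):
        # Softmax choice between styles based on learned values.
        weights = {a: math.exp(beta * v) for a, v in q.items()}
        total = sum(weights.values())
        action = "outrage" if random.random() < weights["outrage"] / total else "neutral"
        # Assumed-for-illustration rewards: outrage gets more engagement on average.
        reward = random.gauss(8, 2) if action == "outrage" else random.gauss(3, 2)
        # Prediction-error update: above-expectation feedback reinforces the choice.
        q[action] += alpha * (reward - q[action])
    weights = {a: math.exp(beta * v) for a, v in q.items()}
    return weights["outrage"] / sum(weights.values())


print(f"probability of posting outrage after learning: {simulate_user():.2f}")
```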

If you feel like you’re met with a lot of anger and vitriol every time you open up your social media apps, you’re not imagining it: A new study shows how these online networks are encouraging us to express more moral outrage over time.

What seems to be happening is that the likes, shares and interactions we get for our outpourings of indignation are reinforcing those expressions. That in turn encourages us to carry on being morally outraged more often and more visibly in the future.

What this study shows is that reinforcement learning is evident in the extremes of online political discussion, according to computational social psychologist William Brady from Yale University, who is one of the researchers behind the work.

“Social media’s incentives are changing the tone of our political conversations online,” says Brady. “This is the first evidence that some people learn to express more outrage over time because they are rewarded by the basic design of social media.”

The team used computer software to analyze 12.7 million tweets from 7,331 Twitter users, collected during several controversial events, including debates over hate crimes, the Brett Kavanaugh hearing, and an altercation on an aircraft.

For a tweet to qualify as showing moral outrage, it had to meet three criteria: it had to be a response to a perceived violation of personal morals; it had to show feelings such as anger, disgust, or contempt; and it had to include some kind of blame or call for accountability.

The researchers found that getting more likes and retweets made people more likely to post more moral outrage in their later posts. Two further controlled experiments with 240 participants backed up these findings, and also showed that users tend to follow the ‘norms’ of the networks they’re part of in terms of what is expressed.
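As a purely illustrative aside, the three criteria in the excerpt above could be sketched as a naive rule-of-thumb check like the one below. The study itself used a trained machine-learning classifier; the keyword lists here are my own placeholder assumptions, not the researchers’ method.

```python
# Illustrative-only sketch of the article's three criteria for "moral outrage";
# the keyword lists are placeholder assumptions, not the study's classifier.

MORAL_VIOLATION_TERMS = {"wrong", "unjust", "immoral", "corrupt", "disgrace"}
EMOTION_TERMS = {"angry", "outraged", "disgusted", "sickening", "contempt"}
BLAME_TERMS = {"should be fired", "hold them accountable", "arrest", "resign", "their fault"}


def looks_like_moral_outrage(text: str) -> bool:
    """True only when all three criteria from the article appear to be met:
    a perceived moral violation, an emotion such as anger/disgust/contempt,
    and some form of blame or call for accountability."""
    t = text.lower()
    has_violation = any(term in t for term in MORAL_VIOLATION_TERMS)
    has_emotion = any(term in t for term in EMOTION_TERMS)
    has_blame = any(term in t for term in BLAME_TERMS)
    return has_violation and has_emotion and has_blame


print(looks_like_moral_outrage(
    "This ruling is corrupt and unjust, I am outraged. They should be fired for it."
))  # True under this toy rule set
```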

*** This is a Security Bloggers Network syndicated blog from flyingpenguin authored by Davi Ottenheimer. Read the original post at: https://www.flyingpenguin.com/?p=35199