Will Facebook enforce its updated “remove, reduce, and inform” policy to curb fake news and manage problematic content?
Facebook announced updates to its “remove, reduce, and inform” strategy to better control “problematic” content and fake news across Facebook, Instagram, and Messenger. No new tools or updates were announced for WhatsApp. By problematic content, Facebook means content that is inappropriate but does not violate its community guidelines; the goal is to reduce its spread. Similarly, on Instagram, the company is reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines. These posts will not be recommended on the Explore and hashtag pages but can still appear in your feed if you follow the account that posts them. For instance, the company adds, “a sexually suggestive post will still appear in Feed but may not appear for the broader community in Explore or hashtag pages.”
Facebook disclosed the news to a small group of journalists at an event in Menlo Park on Wednesday. “This strategy,” Facebook said, “applies not only during critical times like elections but year-round.”
Last week, WhatsApp launched a ‘Checkpoint Tipline’ feature in India to verify messages during the election. “Launched by PROTO, an India-based media skilling startup, this tip line will help create a database of rumors to study misinformation during elections for Checkpoint,” Facebook said in a statement.
However, an investigation led by BuzzFeed News found that the tool is meant more for research than for debunking fake news. Per BuzzFeed, FAQs uploaded on Proto’s website suggest it is intended for research purposes only.
Increasing overall product integrity
Facebook has rolled out a Community Standards site where people can track the updates Facebook makes each month. All policy changes will be visible to the public, with specifics on why certain changes were made.
Facebook Groups admins will be held more accountable for Community Standards violations. When deciding whether or not to take a group down, Facebook will look at content violations by the group’s admins and moderators, and will treat member posts they have approved as a stronger signal that the group violates Facebook’s standards. This feature has also been released globally.
A new Group Quality feature will provide an overview of content removed or flagged for violations, including a section for false news found in the group. This initiative will roll out globally in the coming weeks.
They are also expanding their third-party collaborations for news flagging and fact-checking by adding The Associated Press to the third-party fact-checking program. AP will be debunking false and misleading video misinformation and Spanish-language content appearing on Facebook in the US. Surprisingly, AP fact-checking has not been added globally. India is Facebook’s largest market and is also conducting its national elections over this month and the next. Current fact-checking agencies in India include AFP India, Boom, Fact Crescendo, Factly, India Today Fact Check, Newsmobile Fact Checker, and Vishvas.News.
Facebook has made the admin and moderator policies as well as the Group Quality feature available globally, but not the AP partnership.
Read also: Ahead of Indian elections, Facebook removes hundreds of assets spreading fake news and hate speech, but are they too late?
If a Facebook group is found to repeatedly share misinformation that has been rated false by independent fact-checkers, Facebook will reduce that group’s overall News Feed distribution. Interestingly, Facebook is not suspending these groups outright; it only removes or suspends content that “violates their policies,” even if other content is deemed inappropriate.
Facebook will also incorporate a new “Click-Gap” signal into News Feed ranking so that people see less low-quality content in their feeds. Per Facebook, “This new signal, Click-Gap, relies on the web graph, a conceptual “map” of the internet in which domains with a lot of inbound and outbound links are at the center of the graph and domains with fewer inbound and outbound links are at the edges. Click-Gap looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph. This can be a sign that the domain is succeeding on News Feed in a way that doesn’t reflect the authority they’ve built outside it and is producing low-quality content.”
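Facebook has not published the Click-Gap algorithm, but the idea it describes — comparing a domain’s share of Facebook clicks against its standing in the web graph — can be sketched with a simple heuristic. Everything below (function name, the degree-based authority proxy, the toy data, and the threshold) is an illustrative assumption, not the real implementation:

```python
# Hypothetical sketch of a "Click-Gap"-style heuristic. The authority
# proxy (share of web-graph links), the threshold, and all names here
# are assumptions for illustration only.

def click_gap_scores(link_counts, fb_clicks):
    """Score domains by how much their share of Facebook clicks
    exceeds their share of web-graph links (a crude authority proxy)."""
    total_links = sum(link_counts.values())
    total_clicks = sum(fb_clicks.values())
    scores = {}
    for domain, links in link_counts.items():
        graph_share = links / total_links
        click_share = fb_clicks.get(domain, 0) / total_clicks
        # A ratio well above 1 means the domain draws far more Facebook
        # clicks than its place in the web graph would suggest.
        scores[domain] = (click_share / graph_share) if graph_share else float("inf")
    return scores

# Toy data: inbound+outbound link counts vs. clicks originating on Facebook.
links = {"news.example": 900, "clickbait.example": 20, "blog.example": 80}
clicks = {"news.example": 500, "clickbait.example": 450, "blog.example": 50}

gaps = click_gap_scores(links, clicks)
flagged = [d for d, gap in gaps.items() if gap > 5]  # arbitrary cutoff
print(flagged)  # the low-authority, high-click domain stands out
```

In this toy example, `clickbait.example` holds 2% of the web-graph links but 45% of the Facebook clicks, so it is the only domain flagged — the kind of mismatch the quoted description says Click-Gap is meant to detect.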
Specifically for the Facebook and Messenger apps
The Context Button feature has now been added to images, giving people more background information about the publishers and articles they see in News Feed. Facebook is testing this feature on images that have been reviewed by third-party fact-checkers. Trust Indicators, created by a consortium of news organizations known as the Trust Project, have also been added to the Context Button; they surface a publication’s fact-checking practices, ethics statements, corrections, ownership and funding, and editorial team. This feature launched in March 2019 for English and Spanish content.
Facebook will also be adding more information to the Page Quality tab, starting with a Page’s status with respect to clickbait. In addition, Facebook will allow people to remove their posts and comments from a group after they leave it.
The Verified Badge is now officially a part of Messenger as a visible indicator of a verified account. Messenger also gains Messaging Settings and an updated Block feature for greater control, along with a Forward Indicator and Context Button to help prevent the spread of misinformation. The Forward Indicator lets someone know if a message they received was forwarded by the sender, while the Context Button provides more background on shared articles.
NEW: @instagram says it won’t ban anti-Muslim extremist Laura Loomer for calling on her 112,000 followers to “rise up” against Muslim congresswoman @IlhanMN, who Loomer falsely claimed wants “another 9/11.” https://t.co/YR8z4aKJvw
— Christopher Mathias (@letsgomathias) April 11, 2019
Dear @instagram, @facebook employees: you are morally right to strike or resign over this. https://t.co/SBYwUohDiX
— Justin Hendrix (@justinhendrix) April 12, 2019
*** This is a Security Bloggers Network syndicated blog from Security News – Packt Hub authored by Sugandha Lahoti. Read the original post at: https://hub.packtpub.com/will-facebook-enforce-its-updated-remove-reduce-and-inform-policy-to-curb-fake-news-and-manage-problematic-content/