Facts, Schmacts – Meta Joins X in Ceasing Content Moderation
On January 6, 2025, Meta, formerly known as Facebook, formally announced that it would cease its “fact-checking” operations and allow the internet itself, through user comments, to be the final arbiter of what is true and false.
Cool.
Meta’s refusal to fact-check user-generated content is rooted in a stated commitment to uphold free expression. Mark Zuckerberg, the company’s CEO, has argued that the platform should not act as an arbiter of truth, particularly in the context of political discourse. This approach ostensibly seeks to avoid accusations of bias and censorship, ensuring that users can freely express their opinions, even if those opinions are controversial or unpopular. Or simply “stupid” or “wrong.” The problem with this approach is that it concedes that truth or fact is simply a matter of whichever voices are the loudest, most active or most interested. While Aldous Huxley famously said that “facts do not cease to exist because they are ignored,” apparently they do cease to exist when the loudest trolls on the internet insist that they do.
There are such things as objective truths – scientific truths, factual truths, objectively true things. The Earth is round (well, pear-shaped), the Earth circles the Sun (well, in an elliptical and helical orbit), the Shoah/Holocaust happened, and 9/11 was NOT an inside job. Oh, and choosy kids choose Jif peanut butter (I prefer Skippy myself).
However, critics contend that this hands-off approach creates fertile ground for the proliferation of misinformation, hate speech and harmful content. For instance, the platform has been implicated in the spread of false claims about elections, public health crises and other sensitive issues, leading to real-world consequences. Meta’s reliance on third-party fact-checkers — independent organizations that review and flag potentially false information — has been criticized as insufficient, as the ultimate decision to moderate or remove content often rests on subjective judgments.
In January 2025, Meta announced sweeping changes to its content review policies, which included eliminating its partnerships with third-party fact-checkers. Instead, the company introduced “community notes,” a system resembling the user-generated context labels implemented by Elon Musk’s X (formerly Twitter). In his announcement, Zuckerberg stated that fact-checkers had become overly politically biased, undermining trust and exacerbating divisions. While the new policy aims to reduce censorship, Zuckerberg acknowledged that it represents a tradeoff, as it will likely lead to an increase in harmful content on the platform.
The Legal Landscape: Defamation and Section 230
Under Section 230 of the Communications Decency Act – a statute that the incoming President Trump has vowed to repeal – ISPs, carriers and social media companies are not “liable as publishers” for statements made by persons on their platforms. This is true even if the platform takes steps to amplify, diminish, or censor content.
The government is all for mandatory censorship of certain content. Child pornography, obscenity, classified information, national security information and the like are all banned online, and ISPs and carriers not only can be held liable for knowingly distributing these materials but also can be compelled to report such content to authorities. Companies including Google, Facebook, Verizon and Comcast spend millions creating content filters to keep some of this stuff out. Similarly, companies can be held liable for things like contributory copyright infringement if they fail to act to remove infringing content that flows through their systems.
When it comes, however, to misinformation, the legal landscape changes. I mean, who is to say WHAT is true? Gravity? Just a theory. Until you are pushed down a flight of stairs. Then it’s a theory with a bunch of broken bones. The JFK assassination? Aliens.
The immunity granted by Section 230 has enabled platforms like Meta to host vast amounts of user-generated content without assuming the risks associated with traditional publishers. However, this immunity has increasingly come under fire from both sides of the political spectrum. Conservatives often argue that platforms censor right-leaning viewpoints, while liberals contend that companies like Meta do too little to address misinformation and harmful content. Without taking political sides here, when one side depends upon denying the very existence of viruses or virus-borne diseases, or insists that vaccines are the work of the devil, “truth” ends up taking a political bent.
Recent legislative and judicial developments signal potential changes to Section 230. Proposals to amend the law have focused on conditioning immunity on the implementation of certain content moderation practices, such as removing harmful or illegal content. Meta’s decision not to fact-check content could become a focal point in these debates, as lawmakers and courts grapple with whether such policies align with the original intent of Section 230.
Supreme Court’s Role: Paving the Path Forward
The Supreme Court has yet to directly address many pressing questions surrounding Section 230 and platform liability. However, recent rulings suggest that the Court may be willing to revisit the scope of immunity provided under the statute. For example, cases like Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh, which examined whether platforms could be held liable under anti-terrorism laws for content posted by users, have sparked intense debate about the boundaries of Section 230 immunity.
In Twitter, Inc. v. Taamneh (2023), the Court unanimously held that Twitter could not be held liable under the Justice Against Sponsors of Terrorism Act (JASTA) for a terrorist attack that occurred in Turkey. The Court emphasized the high bar for proving “knowing and substantial assistance” to wrongful acts, finding that Twitter’s general platform operations did not meet this standard. Similarly, in Gonzalez v. Google, the Court remanded the case to the Ninth Circuit, instructing it to reconsider the plaintiffs’ claims in light of the Taamneh decision. These rulings underscore the judiciary’s cautious approach to imposing liability on social media platforms for user-generated content but leave open the possibility of narrower liability frameworks in future cases.
Broader Implications: Free Speech, Misinformation and Corporate Responsibility
The debate over Meta’s fact-checking policy extends beyond legal considerations, touching on broader societal questions about free speech, misinformation, and the role of corporations in shaping public discourse. Meta’s decision to refrain from fact-checking aligns with a libertarian view of free speech, emphasizing minimal interference in user expression. However, critics argue that this approach overlooks the unique power dynamics of social media platforms, where algorithms can amplify certain voices and marginalize others. By allowing false or harmful content to proliferate, Meta’s policies may inadvertently stifle the speech of those who are harmed by misinformation or targeted harassment. Is blocking revenge porn, nudified images and AI-generated fake representations “censorship”? Not all viewpoints are valid, and not all postings are permitted. The truth is that the truth is hard. And presenting it while stifling misinformation is difficult and (here’s the kicker) expensive. It’s also controversial.
Misinformation and Democratic Stability
The spread of misinformation on Meta’s platforms has been linked to numerous societal harms, from undermining trust in elections to fueling vaccine hesitancy. Critics argue that Meta’s refusal to engage in direct fact-checking exacerbates these issues, as false information can go viral before third-party fact-checkers have the opportunity to intervene. The democratic implications of misinformation are particularly concerning. When falsehoods are amplified on a massive scale, they can distort public discourse, erode trust in institutions and influence political outcomes. Meta’s decision to prioritize free expression over active content moderation thus carries significant risks for the health of democratic societies.
Navigating a Complex Future
Meta’s decision not to engage in direct fact-checking, coupled with its recent policy shift to eliminate third-party fact-checkers in favor of user-generated community notes, reflects a complex calculus of legal, ethical and practical considerations. While the policy aims to uphold free expression and reduce censorship, it also exposes the platform to criticism for enabling the spread of harmful content. The legal implications of this decision are profound, particularly in light of ongoing debates about defamation law, Section 230 immunity, and the evolving role of algorithms in content dissemination.
The truth is what we make of it. Let’s vote on whether that’s true or not.