The Internet’s Future at Stake (Really!) as Supreme Court Takes Up Provider Immunity

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Those 26 words helped create the modern internet, for better or worse. They provide almost limitless immunity for platforms like Google, Facebook, Twitter and others to disseminate information without liability for what that information says. Mostly. In fact, these platforms might not exist but for those 26 words.

On February 21 and 22, 2023, the U.S. Supreme Court heard oral arguments in a pair of cases, Gonzalez v. Google and Twitter v. Taamneh, that could determine the scope and extent of the immunity Congress afforded in what is known as “Section 230” of the Communications Decency Act, and how a balance should be struck between the interests of those injured by harmful internet content and the platforms that disseminate (and often amplify) that content.

Death by Publication

The terrorist organization known in the United States as ISIS engaged in, took credit for or applauded a series of attacks in Paris, France; Istanbul, Turkey; and San Bernardino, California, in which multiple people died. The victims’ families sought relief under a law that provides civil remedies for injuries suffered “by reason of an act of international terrorism.” The families did not sue ISIS; they sued Twitter, Facebook and Google—platforms that, the families alleged, distributed, published, amplified and profited from postings by ISIS and ISIS supporters and advocates. The families also argued that the recommendation algorithms used by these platforms—which draw on viewers’ personal information to suggest new content related to their interests—served to highlight and disseminate ISIS-related content (from which the platforms made a profit), and that this amounted to material support for the terrorist organization for which the platforms should be held liable. The theories of liability, the acts of the three platforms and the allegations of the multiple plaintiffs all differ somewhat, but in essence, the victims’ families alleged that the platforms profited from amplifying the terrorists’ message and should be liable for doing so.
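The recommendation mechanics at the heart of the Gonzalez allegations can be illustrated with a deliberately simplified sketch. The following Python snippet is purely hypothetical (the tags, scoring and function names are my own assumptions, not a description of how any real platform ranks content), but it shows the basic dynamic the families complained about: a system that scores candidate posts by how closely they match what a viewer has already watched will keep surfacing more of the same material.

# Hypothetical, simplified sketch of interest-based recommendation.
# Not any real platform's system; tags and scoring are illustrative only.
from collections import Counter

def recommend(watch_history, candidates, k=3):
    # Build an "interest profile" from the tags of previously watched posts.
    interests = Counter(tag for post in watch_history for tag in post["tags"])

    def score(post):
        # A post scores higher the more its tags match the viewer's profile,
        # which concentrates the viewer on more of the same kind of content.
        return sum(interests[tag] for tag in post["tags"])

    return sorted(candidates, key=score, reverse=True)[:k]

history = [{"tags": ["extremist-propaganda", "recruiting"]},
           {"tags": ["extremist-propaganda"]}]
pool = [{"id": 1, "tags": ["cooking"]},
        {"id": 2, "tags": ["extremist-propaganda", "recruiting"]},
        {"id": 3, "tags": ["sports"]}]
print(recommend(history, pool, k=2))  # the matching post ranks first

Whether that kind of automated matching is the platform’s own speech or merely the neutral arrangement of someone else’s is, in essence, the question the Court has been asked to answer.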

But then, there are those 26 words.

In 1990, two competing news aggregation and comment services, “Skuttlebut” and “Rumorville,” took to the newly commercialized internet. When the operators of the latter posted what the former considered defamatory information about them on the online service CompuServe, Skuttlebut’s operators sued. But they did not sue Rumorville—they sued the platform on which Rumorville posted the allegedly defamatory materials: CompuServe. Skuttlebut alleged that the platform published, distributed and amplified the defamation and should be held liable. CompuServe argued that even if the statements were false and defamatory, it simply “acted as a distributor, and not a publisher, of the statements, and cannot be held liable for the statements because it did not know and had no reason to know of the statements.” The New York federal court agreed and granted CompuServe summary judgment.

Four years later, an unnamed poster to a Prodigy message board called “Money Talks” posted messages about the now-infamous “Wolf of Wall Street” firm, Stratton Oakmont, and its then-president Daniel Porush (the inspiration for Jonah Hill’s character in the movie). The messages, posted on a moderated bulletin board, called one Stratton Oakmont offering a “major criminal fraud” and “100% criminal fraud,” said Porush was “soon to be proven criminal,” and described Stratton Oakmont as a “cult of brokers who either lie for a living or get fired.”

In that case, the court held that Prodigy could be held liable for defamation as a publisher, citing a number of facts: Prodigy promulgated “content guidelines” asking posters to refrain from posting notes that were “insulting,” advised that notes that “harass other members or are deemed to be in bad taste or grossly repugnant to community standards, or are deemed harmful to maintaining a harmonious online community” would be removed, and reserved the right to remove offending materials. The court also noted that a screening program automatically prescreened all bulletin board postings for offensive language, that the boards were moderated by actual humans, and that those humans had the ability to delete content. On those facts, the court treated the platform as the publisher of the “Wolf of Wall Street” comments.

In the CompuServe case, the platform was alleged to be liable for the actions of another—the Rumorville defendants. In the Prodigy case, the platform was held liable for its own actions—for improperly exercising judgment over what it decided to post and not post online. Or at least that’s what the court found.

In response to the dual CompuServe/Prodigy cases, Congress did something it rarely does. It did something. Congress passed Section 230 of the Communications Decency Act. The statute has repeatedly been held to shield online platforms from liability not only for the acts of third parties (what they post) but also for the acts of the platform in deciding what to do with those posts. While that deference is not universal, courts have generally read Section 230 immunity broadly.

That may change when the Supreme Court rules in these cases. My colleague Eric Goldman maintains a blog that has collected virtually every Section 230 case, and he has noted that he “expect[s] the arguments will go poorly for free speech and the internet’s status quo.” The debate has clearly been politicized (D versus R), with one side claiming that “Big Tech” has been deliberately “censoring” its speech and calling both for the removal of immunity and for compelling platforms to publish (in effect, taking the position that Twitter would be required to publish ISIS’s screeds), and the other side defending the immunity while being upset at “Big Tech” for not doing enough to protect society from what it deems “misinformation.”

Expect a lively debate before the court.

So here’s the problem. If we hold platforms liable for the content of others, we impose a duty on them to read every single posting, tweet, comment, like, etc., and determine whether it is true or false, whether it was posted with malicious intent and whether it is likely to cause harm. Undoubtedly, these platforms would have to develop “misinformation” algorithms to limit what users can post online—and would then face potential liability for denying users access to the platform (not “censorship,” a term properly reserved for government action). So, making platforms liable for third-party content forces platforms to filter third-party content—in effect, to exercise strong editorial control.
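To make the point concrete, here is a deliberately crude, hypothetical pre-screening filter of the sort such liability would push platforms toward. The blocklist and wording are illustrative assumptions on my part, not a real moderation policy; the point is that automated filtering at this scale inevitably both over-blocks and under-blocks.

# Hypothetical, naive pre-screening filter: reject a post if it contains any
# blocklisted term. Illustrative only; real moderation pipelines are far more
# elaborate, but the basic over-blocking/under-blocking tradeoff starts here.
import re

BLOCKLIST = {"fraud", "scam"}  # illustrative terms, not an actual policy

def allowed(post):
    # Split the post into lowercase words and check against the blocklist.
    words = set(re.findall(r"[a-z']+", post.lower()))
    return BLOCKLIST.isdisjoint(words)

print(allowed("This offering is a major criminal fraud"))    # False: blocked
print(allowed("How do I report securities fraud?"))          # False: blocked, wrongly
print(allowed("A totally legitimate investment, trust me"))  # True: allowed, perhaps wrongly

A filter like this would have blocked the Stratton Oakmont posting, but it would also block a victim asking how to report the fraud, while waving through the fraud itself when phrased politely. Scale that up to billions of posts and you have the editorial-control problem in miniature.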

On the other hand, if we tell platforms they have no liability for third-party content, we make it difficult for victims of malicious online behavior to respond. We make it hard to take down defamatory or harmful materials. Platforms would have no liability for revenge porn posted to their sites, deepfakes, misinformation or organized fraud schemes. Nigerian princes and terrorists. Human sacrifice! Dogs and cats living together! Mass hysteria!

A middle position would be for providers to have no liability for the actions of third parties (those who post, tweet or comment) but to have liability for their own actions. Thus, the operator of a website that encourages people to defame others, or that deliberately creates an algorithm designed to amplify hate speech, might be liable for doing so. However, this brings these platforms closer to the Stratton Oakmont case and further from CompuServe.

Don’t Wait for the Court

At the same time, both Congress and the states are weighing in: Texas and Florida have passed laws limiting the ability of platforms to remove offending content, while states like California and New York are taking the opposite approach and mandating the removal of certain offending content.

In another approach to content moderation, states have attempted to require platforms to be more transparent about their takedown policies. Proposals in California, Georgia and Ohio would mandate that platforms not only publicize their terms of service but also provide regular reports on what they have done in response to violations—including a requirement that they release confidential internal information about their takedown policies and procedures.

It seems that everyone has a love/hate relationship with Big Tech. The high court may affirm, curtail or modify the immunity the platforms currently enjoy when carrying third-party content. Whatever it does, the decision will be controversial. And it may break the internet as we know it.


Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including those of Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. Prior to joining Verizon, Rasch was a frequent commentator in the media on issues related to information security, appearing on the BBC, CBC, Fox News, CNN, NBC News and ABC News, and in The New York Times, The Wall Street Journal and many other outlets.
