Are Internet Providers ‘Aiding and Abetting’ Crimes?

The internet was on tenterhooks over the question of whether the U.S. Supreme Court would find that online providers like Google and Facebook could continue to enjoy protection under Section 230 of the Communications Decency Act for the statements and actions of their users. In particular, the Supreme Court heard an appeal from the United States Court of Appeals for the Ninth Circuit addressing the liability of Google’s YouTube service for amplifying the messages of ISIS and other terrorist organizations. Families of victims of terrorist attacks sued Google, alleging that YouTube “aided and abetted” the terrorist acts because the online video-sharing service both provided the terrorist organization an outlet to recruit and coordinate its activities and deployed a recommendation algorithm, designed to maximize viewer engagement and advertising profits, that aided the radicalization of members and recruits to the terrorist organization.

The high court faced two questions. First, whether the federal anti-terrorism statute, which provides the families of victims with a civil cause of action against those who “aided and abetted” terrorism, could be applied to entities like YouTube, Google and Twitter, as the Ninth Circuit Court of Appeals had found. Second, whether Section 230 of the Communications Decency Act provided these entities with immunity from civil liability. The second question was the more significant, as courts and policymakers have struggled to strike the appropriate balance between the need for online accountability and the danger of compelling censorship by the tech giants.

On May 18, 2023, the U.S. Supreme Court dodged the Section 230 question and ruled only on the scope of the “aiding and abetting” law, finding that neither Google, through its YouTube service, nor Twitter, with its online messaging, “aided and abetted” the terrorist acts, and reversing the findings of the appellate court. Since there was no liability, the court had no need to decide in this case whether there was immunity.

Death by Publication

In 2015, ISIS terrorists unleashed a set of coordinated attacks across Paris, France, killing 130 victims, including Nohemi Gonzalez, a 23-year-old U.S. citizen. Gonzalez’s parents and brothers sued Google, LLC, under 18 U.S.C. §§2333(a) and (d)(2), alleging that Google was both directly and secondarily liable for the terrorist attack that killed Gonzalez. The statute provides civil remedies to recover damages for injuries suffered “by reason of an act of international terrorism.” The families did not sue ISIS; instead, they sued Twitter, Facebook and Google, platforms that the families alleged distributed, published, amplified and profited from postings by ISIS and its supporters and advocates. The families also argued that the algorithms used by these platforms, which draw on viewers’ personal information to recommend new content related to their interests, served to highlight and disseminate ISIS-related content from which the platforms profited. This, the families alleged, provided material support to the terrorist organization, for which the platforms should be held liable. The theories of liability, the acts of the three platforms and the allegations of the multiple plaintiffs all differ somewhat but, in essence, the victims alleged that the platforms profited from amplifying the terrorists’ message, that the YouTube algorithm enhanced the effectiveness of the ISIS message, that this amplification caused or aided the terrorist attacks and that YouTube/Google should be held liable for it.

In 2016, Congress enacted the Justice Against Sponsors of Terrorism Act (JASTA) to allow families of terrorism victims to sue anyone “who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.” The plaintiffs alleged that YouTube and Twitter aided and abetted the ISIS attacks.

The court disagreed. Applying the aiding-and-abetting framework of Halberstam v. Welch, a 1983 case from the D.C. Circuit, the Supreme Court found that, at best, the tech giants failed to adequately prevent the dissemination of ISIS videos and tweets. The court noted:

“Plaintiffs assert that defendants’ ‘recommendation’ algorithms go beyond passive aid and constitute active, substantial assistance. We disagree. The algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”

The Court observed that what the terrorist victims were really arguing was that Twitter and Google failed to prevent the attacks. Referring to the 2017 attack on Istanbul’s Reina nightclub, at issue in the companion case Twitter v. Taamneh, the Supreme Court wrote: “[t]o show that defendants’ failure to stop ISIS from using these platforms is somehow culpable with respect to the Reina attack, a strong showing of assistance and scienter would thus be required. Plaintiffs have not made that showing.”

In short, to “aid and abet” a crime, you must have some knowledge that a crime will be committed and some general desire to assist in its commission. The specifics may be determined on a case-by-case basis, but the high court provided a framework for making this decision in the future.

Why it Matters

The court ducked the more sensitive issue of internet content regulation. Tech companies would much rather have immunity under Section 230 than a defense under the anti-terrorism statute, since immunity allows them to have a case dismissed outright without having to mount a defense (and pay for it). But many things that internet security companies, or even victims of internet crime, do may have the practical effect of assisting those who commit crimes. Sharing vulnerability and incident data could be aiding and abetting. Paying a ransom could facilitate future crime. Providing security consulting services might aid and abet the use of the infrastructure for unlawful activities.

What the court did was set some guideposts on the ability of the government or private litigants to go after internet companies whose services might unwittingly further the objectives of some “bad guy” seeking to do bad things. In doing so, it reduced the potential liability of these entities for the bad acts of unaffiliated third parties. What it didn’t do is address the core issue: Should tech giants be policing their customers at all? That’s for a future case.

Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for information security teams. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including those of Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. Rasch has been a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News and ABC News, and in the New York Times, the Wall Street Journal and many other outlets.