Can Antivirus Companies Use ‘Good Samaritan’ Defense to Block Rival Software?

Is the Good Samaritan clause, intended to protect against harmful materials, too broad and ripe for abuse?

The essence of information security is to allow “good” things in and keep “bad” things out, and various tools assist us in doing that. Included in this list are anti-malware software, filtering software, and anti-spam and anti-phishing software. We also use content-based blocking and filtering to prevent users from accessing malicious websites and websites known for hosting infringing content, pornography and worse. That’s all part of a comprehensive information security program.

But what if our tools were designed not only to keep bad actors and bad code out of our systems but also to prevent us from finding out about and/or installing programs that would better fit our security needs? What if Google prevented us from searching for information about Bing? What if our antivirus software treated competing antivirus or anti-malware programs like viruses and blocked their download and installation—not for the benefit of the customer or because the software was inherently incompatible, but because it was better for the developer’s business model? Essentially, that makes the software self-perpetuating: it defends itself rather than its users. When does Skynet become active in the current timeline?

The question of whether an anti-malware provider can use its software to block competitors (or, more accurately, to block customers from using competitors’ products) is currently under consideration (well, reconsideration) by the federal appeals court in California. In September, the United States Court of Appeals for the Ninth Circuit ruled that the immunity granted under the law that permits the blocking of content does not apply when the purpose (or effect?) of the blocking is anti-competitive and in restraint of trade.

Here’s a bit of background.

Section 230, Good Samaritan Clause

In 1995, in a case involving the “Wolf of Wall Street” firm Stratton Oakmont, representatives of the company sued the online service provider Prodigy over messages posted by third parties on Prodigy’s financial message boards, which Stratton Oakmont considered defamatory. The question for the court was whether the provider that hosted the board could be held liable for content posted by others when it actively curated that content. The New York court held that Prodigy was, in fact, a “publisher” of the third-party content because it controlled what could be posted, had the ability to take it down and was responsible for disseminating the offending content to the world—or whatever small subset of “the world” was on Prodigy at the time.

Needless to say, this ruling was perceived to be inconsistent with the nature of the then newly opened World Wide Web. Congress responded with the Communications Decency Act, Section 230 of which provides immunity to internet providers, websites, software developers and, ultimately, social media hosting companies both for the content that third parties post on their servers and for actions taken to block access to “material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable …” This is what Congress called the “Good Samaritan” clause of the CDA. Taken in the context of the Communications Decency Act, the provision was intended primarily to allow the blocking of pornography or materials deemed harmful to minors, but the language is broad enough to provide immunity to any provider (hardware, software, access, etc.) that blocks access to anything it believes (for whatever reason) is “otherwise objectionable.”

Maybe.

Send in the PUPs!

Enigma and Malwarebytes both provide anti-malware software that helps internet users filter unwanted content from their computers. In its lawsuit, Enigma alleged that Malwarebytes configured its software to block users from accessing Enigma’s software in order to divert Enigma’s customers.

As the court described the issue:

Malwarebytes programs its software to search for what it calls Potentially Unwanted Programs (“PUPs”)… In late 2016, … Malwarebytes revised its PUP-detection criteria to include any program that, according to Malwarebytes, users did not seem to like. After the revision, Malwarebytes’s software immediately began flagging Enigma’s most popular programs—RegHunter and SpyHunter—as PUPs. Thereafter, anytime a user with Malwarebytes’s software tried to download those Enigma programs, the user was alerted of a security risk and, according to Enigma’s complaint, the download was prohibited, i.e. Malwarebytes “quarantined” the programs.

Of course, it is a factual question whether Malwarebytes blocked the download and installation of Enigma’s code because it was interfering with users’ computers or simply because the product interfered with Malwarebytes’ business model.
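That factual question matters because the blocking mechanism itself is indifferent to motive. The sketch below is purely hypothetical (the file names and logic are invented for illustration and are not taken from Malwarebytes’ product), but it shows how a filter simply quarantines whatever appears on the vendor’s own list, whatever the reason that entry was added.

```python
# Hypothetical illustration only -- not Malwarebytes' actual detection logic.
# A filter can "quarantine" anything on a vendor-maintained list, whether the
# entry was added because the program is harmful or because it is a competitor.

PUP_LIST = {
    "reghunter-installer.exe",   # invented file names, for illustration only
    "spyhunter-installer.exe",
}

def scan_download(filename: str) -> str:
    """Return the action the filter takes for a downloaded file."""
    if filename.lower() in PUP_LIST:
        # The user sees a "security risk" alert and the download is quarantined;
        # nothing in this check distinguishes a genuine threat from a rival product.
        return "quarantine"
    return "allow"

if __name__ == "__main__":
    print(scan_download("SpyHunter-Installer.exe"))  # -> quarantine
    print(scan_download("photo-editor.exe"))         # -> allow
```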

I Object!

So Section 230 gives entities—including software and malware blocking providers—immunity from most lawsuits for blocking pornography, violence and “otherwise objectionable” materials. The questions are: Who decides what materials are “objectionable,” and can that decision be made based on any criteria the software developer wants? In other words, if Google objects to your searching for information about Microsoft, can it use Section 230 to head off a lawsuit in Redmond, Washington, for things such as antitrust violations, restraint of trade or tortious interference? Or can Google simply say, “We find Microsoft objectionable,” and block it?

The Ninth Circuit held that the term “otherwise objectionable content” in the Good Samaritan clause was, based on the history of the statute and its purpose, intended to permit entities to block things that users would find objectionable, not things that the filterer finds objectionable for its own business purposes. The court noted:

“Immunity for filtering practices aimed at suppressing competition, rather than protecting internet users, would lessen user control over what information they receive, contrary to Congress’s stated policy. … Indeed, users selecting a security software provider must trust that the provider will block material consistent with that user’s desires. Users would not reasonably anticipate providers blocking valuable online content in order to stifle competition.”

So is this GOOD for computer security or BAD?

As my legal colleague Eric Goldman from Santa Clara Law School points out in a brief filed in the case:

“The … decision will foster spurious legal accusations of anti-competitive blocking of software programs that are, in fact, dangerous to businesses and consumers. These legal threats will hinder the ability of anti-threat software vendors to properly classify threats to businesses and consumers, which will make the Internet less safe for everyone.”

Vendors of all kinds of filtering products—those intended to let “good” things in and keep “bad” things out—will have to worry about whether their blocking of spam, malware, phishing and the like will get them sued despite the Good Samaritan clause, and as a result they will block and filter less content, including less malicious content. Every entity that has content blocked, every website that finds itself on a blacklist, every user who wants their Facebook page seen will simply allege that the filterer—the blocking software, Facebook, Twitter, etc.—is acting “in restraint of trade.” Every attempt to keep “bad” things out will not only be second-guessed but will also result in harmful and expensive litigation. Rather than face lawsuits, filtering entities will either be reluctant to block or simply not block at all. Not good for information security.

This “all or nothing” analysis might represent good policy, but in my opinion it is not consistent with the structure, language and purpose of the Good Samaritan clause. The statute encourages the blocking of pornography, violence and “otherwise objectionable” content, not “any other content.” If Congress had intended immunity for blocking any content at all, there would have been no need to mention porn or violence. The statute is also unclear about to whom the materials must be objectionable and whether that objection must be reasonable. When you block content (conduct for which you have immunity), you have to do so at least semi-responsibly, and in a way that does not itself violate the law. The immunity is broad, but it’s not unlimited. It’s intended to be a shield against claims of misconduct and censorship, not a sword to violate other laws.

But that still creates a problem for anyone filtering content. The risk is not that you will do something prohibited by law, but that doing something permitted by law will nonetheless embroil you in litigation in which you are forced to prove that what you did was proper. A blanket, broad immunity with no exceptions for blocking or filtering any content would result in the immediate dismissal of any lawsuit against filtering or blocking entities. If that’s what Congress wanted or intended with the Good Samaritan clause, it can clarify. Perhaps Congress should. Perhaps not. That’s for another article.

Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the Information Security team. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as Director or Managing Director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems, and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts that eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including those of Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. He has been a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News, ABC News, the New York Times, the Wall Street Journal and many other outlets.
