Is Apple’s Client-Side Child Porn Scanning Legal?

Apple recently announced that it will start scanning the devices individuals have purchased from Apple both for images that constitute child sexual abuse material (CSAM) and for text messages or communications that are “inappropriate for minors.” Thus, the drives and folders of tens of millions of innocent adults and children will be examined by the Cupertino company, and the results of these searches will be provided to the National Center for Missing and Exploited Children (NCMEC) and, from there, to law enforcement. Remember, a search that doesn’t find anything is still a search; your iPhone will be searched by Apple.

A previous post addressed the question of whether this “client-side” scanning was a good idea, and the impact it might have on privacy and security. This post will address the question of whether or not it is legal. The short answer is—with all deference to Apple’s teams of high-powered lawyers—probably not.

What is Apple Doing That is Different?

Currently, various ISPs, cloud providers, social media providers and similar providers (collectively, “interactive computer service providers,” or ICSPs) perform scans of files that are uploaded to their servers. In fact, Apple does this on data that is uploaded to its iCloud platform and may do this with respect to messages sent through its iMessage server or its FaceTime videoconferencing feature.

What is different is that Apple proposes to do “client-side” scanning of unencrypted data. This means that Apple will be scanning data on users’ devices, not data on its servers. As noted in the previous post, this means that Apple will have to insert itself into the device at a point where the data is either not yet encrypted or has already been decrypted—essentially presenting the users’ own credentials to view the users’ data. While the technology for scanning encrypted communications and encrypted data stored on a cloud platform is similar, in computer crime law (as in real estate) location matters.
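
To make the distinction concrete, here is a minimal sketch, in Swift, of where the two kinds of scans run. It is illustrative only: a plain SHA-256 hash set stands in for Apple’s perceptual “NeuralHash” and blinded on-device database, and every name in it is invented for this example. The only point being illustrated is the location of the comparison, on the user’s device versus on the provider’s servers.

import CryptoKit
import Foundation

// Illustrative sketch only -- not Apple's actual implementation. A plain
// SHA-256 hash set stands in for Apple's perceptual "NeuralHash" and blinded
// on-device database. The point is *where* the comparison runs, not how the
// matching works.
let knownIllegalHashes: Set<String> = [
    "0000000000000000000000000000000000000000000000000000000000000000" // placeholder entry
]

func hexDigest(of data: Data) -> String {
    SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

// Client-side: the device reads its own local files and checks each one
// against the database before the data is encrypted or ever leaves the phone.
func clientSideScan(localFiles: [URL]) -> [URL] {
    localFiles.filter { url in
        guard let data = try? Data(contentsOf: url) else { return false }
        return knownIllegalHashes.contains(hexDigest(of: data))
    }
}

// Server-side (the status quo described above): the provider checks only data
// the user has already uploaded to the provider's own servers.
func serverSideScan(uploadedBlobs: [Data]) -> Int {
    uploadedBlobs.filter { knownIllegalHashes.contains(hexDigest(of: $0)) }.count
}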

Public vs. Private Search

Many have objected to Apple’s proposed scanning as an “unreasonable search and seizure,” an “unconstitutional search,” a violation of the Fourth Amendment, or some such. For the most part, this analysis is wrong. Apple is a private company. In United States v. Jacobsen, a FedEx employee accidentally opened a package that contained a white powder and called the cops, who tested the substance, found it to be cocaine and let the package continue on, arresting the recipient. In Walter v. United States, boxes of filmstrips “depicting homosexual activities” were accidentally misdelivered; the unintended recipient unsuccessfully attempted to view the films and called the police. The police examined the films without a warrant and allowed them to be transmitted to the “correct” recipient, who was arrested for trafficking in obscene materials. In the case of the drugs, the court held that the “search” by FedEx was a “private search,” not conducted on behalf of the government and not by a “state actor,” and that the subsequent warrantless government search (the test for cocaine) revealed nothing more than what was revealed by the private search. In Walter, the court came to the opposite conclusion: the FBI’s viewing of the films was, in fact, a government “search” of the contents of the box for which a warrant was required.

Nevertheless, courts have held that private searches for CSAM by ISPs, cloud providers and others are purely private searches and, as such, when the private companies turn the results over to some government agency, this is not a governmental search for which a warrant is required. If your neighbor breaks into your house, finds a pot plant on your kitchen table, takes it and calls the cops, the cops can prosecute you for the pot (it’s still illegal federally) and use the evidence found as a result of the private search, even if the neighbor violated the law to get it. An exception exists if the private searcher is acting directly or indirectly as a “government agent”—either because the police asked them to conduct the search (“Go into the neighbor’s house and take the pot plant.”) or because the police encouraged the search (“I ain’t sayin’ you gots to soych, but if you happen ta find yahself in their house …”).

But searches for CSAM—especially searches by ISPs, social media sites and companies like Apple—are not really “private.” Currently, many ICSPs voluntarily scan files in users’ online accounts. Federal law then requires such ICSPs to report any CSAM they detect to NCMEC, which reviews the files and, if they are suspected to be CSAM, reports the finding(s) to law enforcement. In turn, the ICSP is granted immunity from liability as a “publisher” of the CSAM. But the government often goes beyond the mere liability incentive to “encourage” ICSPs to search their customers’ data and turn the results over to law enforcement. In some cases, the FBI paid employees of companies like Best Buy for “voluntarily” reporting CSAM to them. In one case, Judge (now Justice) Neil Gorsuch found that, while AOL was not a “government agent” when it searched for CSAM, NCMEC—the entity designated by Congress to receive reports of CSAM and pass them to law enforcement—is a government “agent” (though not a government agency). A later court, however, held that evidence examined by NCMEC without a warrant could still be used where NCMEC acted in “good faith.”

Congress knows that if ICSPs (including Apple) were legally compelled to report CSAM to law enforcement, they would become “state actors” and the Fourth Amendment might apply to their actions. That’s why ICSPs are “mandatory reporters” not to law enforcement but to NCMEC, which is, of course, a mandatory reporter to law enforcement. Moreover, Congress is moving even further with proposed laws like the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act), which would require ICSPs to implement “best practices” to prevent the proliferation of CSAM (read that as scanning and reporting) or face criminal and/or civil liability for such dissemination. In other words: scan your customers’ files for child porn and report it, or you go to jail. I’m not sure that a search by a company like Apple, under those circumstances, could still be legally justified as a purely “private” search.

But for now, the state of the law is that the First Circuit, Fourth Circuit and Eighth Circuit have found that systematic, programmatic and automated searches of innocent and guilty persons’ files by ICSPs, and the automated delivery of the results of such searches to law enforcement under a statutory scheme that encourages but does not require such scanning, do not make the ICSP a “state actor.” Thus, if Apple searches your hard drive, it’s not acting as a cop, and the Constitution, the Fourth Amendment and everything you learned on Law & Order: Special Victims Unit (dun dun) do not apply.

Crime. Boy, I Don’t Know.

But just because the Apple search of your iPhone is not a “governmental” search does not mean that it is “legal.” In fact, it may very well be a crime—by Apple.

The principal federal laws are the Computer Fraud and Abuse Act (CFAA), the Wiretap Act and the Stored Communications Act (SCA). They prohibit, respectively, computer “trespass” (breaking in), the “interception” of communications and unauthorized access to “stored” electronic communications. Finally, there are federal laws prohibiting the “intentional” transmission and storage of CSAM.

It looks like Apple’s proposal pretty much violates each of these laws.

Forgive Me My Trespasses

The CFAA prohibits intentionally accessing a computer without authorization, or exceeding the scope of authorization to access a computer, in order to “obtain information” from that computer. If I were to hack into your phone (a computer)—even if my intent was to look for and find CSAM—the police could still use what I found, since I am not a “state actor,” but I could still be prosecuted for accessing the device without authorization or in excess of my authorization.

In the case of client-side scanning, nothing gives Apple the “right” to access my phone—at least not for the purposes for which they propose to access it. Sure, I install their software on my phone. Sure, I grant them permission for their software to “access” my phone—it wouldn’t be a phone without that. But I don’t give Apple permission to access my phone, scan its contents and report what they find to anyone. They will have intentionally exceeded the scope of their authorization to “access” my phone.

But, didn’t I consent to this? I mean, they scan my email, my chats and files uploaded to iCloud. Why is this different?

It’s different because, in most of these other cases, Apple is not accessing my device. They are accessing my data on their systems. They access my iCloud account on their server. My emails on their server. From a privacy standpoint, it makes no difference. But from an “unauthorized access” standpoint, it’s a different kettle of fish. But didn’t I consent to let Apple monitor my device when I either agreed to their privacy policy or accepted their EULA? Nope.

I Didn’t Consent To This

When you buy an Apple device with embedded software—whether it’s an iPhone, iPad, Apple Watch or Apple Pencil (and, possibly, even things like Apple earphones and charging cables)—you agree to the ever-changing terms of Apple’s software license agreement. As Apple itself notes:

“Your use of Apple software or hardware products is based on the software license and other terms and conditions in effect for the product at the time of purchase. Your agreement to these terms is required to install or use the product. Please be aware that the software license that accompanies the product at the time of purchase may differ from the version of the license you can review here. Be certain to read the applicable terms carefully before you install the software or use the product.”

If you look at the software license agreement for the most recent version of the Apple mobile operating system, iOS 14, it does not appear to give Apple the right to monitor the contents of a user’s iPhone as a condition precedent to using the device, but it does indicate that Apple’s collection of data is covered by its privacy policy.

The most recent version of the privacy policy indicates that, among other things, Apple may collect “Data about your activity on and use of our offerings, such as app launches within our services, including browsing history; search history; product interaction; crash data, performance and other diagnostic data; and other usage data.” Under one interpretation of this language, any use of an Apple device—including wholly offline use, like taking a picture of your family for storage on the device—constitutes “data about your activity on” the Apple device. But that’s a slender reed on which to rest such sweeping authority. There’s a world of difference between gathering “crash reports” on software and reading people’s text messages or scanning their files.

On the other hand, Apple indicates that it may use data it collects to, among other things, “protect individuals, employees, and Apple” and for “prescreening or scanning uploaded content for potentially illegal content, including child sexual exploitation material” (emphasis added), as well as “[t]o comply with applicable law—for example, to satisfy tax or reporting obligations, or to comply with a lawful governmental request.” Note that Apple doesn’t say to comply with its own tax or reporting obligations—they may turn over your data to the IRS or EPA or DOT to comply with your reporting obligations.

Again, it’s a matter of reasonable interpretation. The “consent” or “authority” of Apple to collect data to “protect Apple” could be used to justify the collection of attorney-client privileged communications in a lawsuit against Apple, or to scan people’s iPhones to see if they have been exposed to COVID or have been vaccinated—without their knowledge or consent. “Hacking” into people’s computers or phones just ’cause you want to “protect” a corporation is, well, hacking. If the company relies on “consent” as justification for access to consumers’ files and data, it had better be much more explicit. In some ways, this is no different than the infamous “spyware” EULAs of the late 1990s and early 2000s: software agreements that permitted the developer to do things wholly unrelated to the software itself.

Even the “scanning for CSAM” language in the privacy policy does not justify or authorize what Apple proposes—scanning users’ hardware for unlawful materials. In fact, by “reserving the right” (or, more accurately, informing users that it intends) to scan “uploaded content” for potentially illegal content, it can probably be argued that the user has not consented to, or authorized, Apple’s scanning of data that is not “uploaded content.” Put simply, users have not consented to the scanning of their devices, and therefore any “access” by Apple for that purpose would constitute “exceeding authorization” to access—even under the liberal interpretation of that phrase most recently adopted by the U.S. Supreme Court.

Put simply, while Apple has the right to protect itself and its interests and, in some cases, to scan uploaded documents for CSAM, the argument that the current EULA or privacy policies permit Apple to scan individuals’ hard drives and iPhones for evidence of illegal conduct (including CSAM) is, at best, a stretch.

I Am Altering the Deal—Pray I Don’t Alter It Further

No problem, though. All Apple has to do is delete the word “uploaded” from the next version of its privacy policy. One problem with online contracts and contracts which relate to an ongoing relationship (e.g., the use of a product that contains software) is that some courts maintain that the continued use of the product (software) after the provider changes the terms of the agreement constitutes consent to the change. Besides, nobody actually reads these terms anyway.

But the EULA and the privacy policy both expressly note that they apply to products and services purchased or used pursuant to the then-existing policy. Merely dropping the word “uploaded” from the privacy policy may not be sufficient to act as express consent to searching the iPhone you bought in 2007.

Also, it’s important to note that Apple claims the right to scan not only for CSAM but for any “potentially illegal content.” That may include things like viruses or malware (which are not illegal to possess or develop, but are illegal to deploy or use unlawfully), but it could also include evidence of any crime. Again, one could read the contract to permit Apple to scan only for items that are themselves contraband—copyright-infringing materials, trademark-infringing materials, CSAM, obscenity and other “digital contraband.”

Under the current privacy policy and EULA, Apple likely has no express or implied right to “access” my phone for the purposes they propose. Again, it is a stretch to argue that a user, simply by buying an iPhone or having software on that iPhone, has “consented” to have these files examined by Apple in situ. While Apple might be able to argue that the language constitutes “clear and unambiguous consent to access users’ hard drives,” a court might disagree.

Apple might also argue that they are not accessing people’s iPhones, iPads and other devices. They may argue that Apple’s software is accessing the data or, similarly, that they are only accessing their own software and not the user’s “computer” (phone). The first argument is addressed by reference to the federal “aiding and abetting” law, which makes it a crime to “cause” someone or something to do something that would be an offense. So if you can’t read someone’s files, then you can’t cause a program to do it. As to the second argument, the claim that Apple is only accessing its own software with its own authorization, rather than your computer without authorization under the CFAA, obfuscates the point that what Apple is really doing is reading your files.

Null Set

It’s important to note that what Apple plans to do is scan the entire contents of every user’s drive for CSAM or materials that are “inappropriate for minors.” This raises a difficult question about what constitutes a “search,” or an actionable search, under the Fourth Amendment (which, again, does not apply to non-government searches). When Apple’s scanning software scans my iPhone, finds no CSAM and reports back, “These aren’t the droids you’re looking for”—or, more accurately, reports nothing back to Apple—and Apple reports nothing to NCMEC, which in turn reports nothing to the FBI, have I been “harmed” as a result of the search? I mean, the government (and Apple) learned nothing about me except that I have no child porn on my device.

The problem with automated scanning is that tens of billions of “innocent” messages belonging to hundreds of millions of users are scanned, with only a tiny fraction of those scans returning results. It’s similar to what the U.S. Supreme Court found in a case involving drug-sniffing dogs investigating stopped cars, where the court, quoting the FedEx drug box case, held that “any interest in possessing contraband cannot be deemed ‘legitimate,’ and thus, governmental conduct that only reveals the possession of contraband ‘compromises no legitimate privacy interest.’” This seems to suggest the court’s current thinking that searches, even “automated” searches (whether by robot or Rover), which only reveal the presence or absence of contraband are acceptable, whether it is packets of drugs or digital packets being searched. But again, the issue under the CFAA is not whether this is a “search” but rather whether Apple is authorized to access its customers’ hard drives at all. And if they are not authorized, then their access is a crime.
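
The “null set” point can be made concrete with a short, purely illustrative sketch; the names and the simple hash comparison below are assumptions, not Apple’s design. Even when nothing is ultimately reported, every file has to be read in full to produce that empty result.

import CryptoKit
import Foundation

// Illustrative only: a scan that "finds nothing" still has to open and hash
// every file in order to find nothing.
func scanAllFiles(_ files: [URL], against knownHashes: Set<String>) -> [URL] {
    var matches: [URL] = []
    for url in files {
        // Every file is read in full, whether or not it ultimately matches.
        guard let data = try? Data(contentsOf: url) else { continue }
        let hex = SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
        if knownHashes.contains(hex) { matches.append(url) }
    }
    return matches
}

func scanAndMaybeReport(files: [URL], knownHashes: Set<String>) {
    let matches = scanAllFiles(files, against: knownHashes)
    // For the overwhelming majority of users this is the empty "null set" and
    // nothing is sent anywhere -- but producing it required accessing every file.
    guard !matches.isEmpty else { return }
    // report(matches) -- hypothetical reporting step, omitted here
}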

Man In The Middle

Remember that Apple proposes not only to scan the contents of hard drives, but also to scan messages and communications that the user thinks (because Apple has led them to think) are encrypted, personal and secure. In other words, communications in which the user has a “reasonable expectation of privacy.”

In general, the wiretap law prohibits the “interception” of the contents of communications in transmission and further prohibits the “disclosure” of the contents of unlawfully intercepted communications. That’s critical, because if Apple is acting unlawfully in intercepting the contents of these messages, then they can’t give them to NCMEC or to the police—even though Apple is acting as a private citizen.

There are two salient exceptions to the wiretap law (the court order exception doesn’t apply to what Apple proposes to do). One is called the “provider” exception. It provides that “[i]t shall not be unlawful [for] a provider of wire or electronic communication service, whose facilities are used in the transmission of a wire or electronic communication, to intercept, disclose, or use that communication in the normal course of his employment while engaged in any activity which is a necessary incident to the rendition of his service or to the protection of the rights or property of the provider of that service…” Apple likely is a “provider” of communications, at least with respect to FaceTime and iMessage, although an argument can be made that Apple is simply the provider of software that enables messages to travel across the internet and the public switched telephone network. But is the scanning of these messages by Apple done “in the normal course of employment,” and is it “necessarily incident” to providing services or to “protecting the rights or property” of Apple? Again, it’s a stretch. While AT&T might have the right to listen in on a phone call to make sure you aren’t using a “blue box” to steal phone service (kids, ask your parents—on second thought, ask your grandparents), Apple is under no obligation to scan for CSAM any more than it is obligated to scan for vaccine misinformation, conspiracy theories or Holocaust denial. Having a legitimate business interest in wiretapping is not the same as doing so in a way that is “necessarily incident” to providing services. Protecting a business reputation (by providing communications that are “safe” for minors) is not the same as protecting the “rights and property” of the provider. So the “provider” exception is probably not going to cut it for Apple.

A second exception to the wiretap law applies where one party to the communication has “consented” to the interception. That’s why you get those recordings that say, “This call may be monitored for quality assurance purposes … your call is very important to us …” Apple could argue that its privacy policy, which expressly permits “prescreening or scanning uploaded content for potentially illegal content, including child sexual exploitation material,” constitutes consent to the interception and scanning of iMessages and FaceTime calls. But it doesn’t. While internet-based messages travel across the web, they aren’t “uploaded” in any meaningful sense; they are “transmitted.” This is a distinction that should make a difference. If Apple wants to be able to act as a “man in the middle” of your text messages and FaceTime calls and relies on the “fact” that you have consented to it, then I would expect a more explicit warning. Something along the lines of “Warning—Lark’s Vomit.”

A final problem with the idea that Apple can “intercept” your texts and VoIP calls under the consent exception is that the party with whom you are communicating may not have consented to the interception. That’s important, because states like New Hampshire, New Jersey, Maryland, Washington, California, Florida, Connecticut, Montana and Illinois (sort of) require that all parties to an electronic communication consent to having their communications intercepted. So if Apple “intercepts” a communication between an Apple user and a non-Apple user, Apple’s terms of service and privacy policy do not act as consent to the interception with respect to the non-Apple user—if they live in one of these states. So, Apple’s scanning of communications for CSAM may constitute a prohibited interception of electronic communications.

Stored Communications

Another federal criminal statute, the Stored Communications Act, makes it a crime to “intentionally access without authorization a facility through which an electronic communication service is provided; or intentionally exceed an authorization to access that facility; and thereby obtain … a wire or electronic communication while it is in electronic storage.” The statute exempts “conduct authorized by the person or entity providing a wire or electronic communications service [or] by a user of that service with respect to a communication of or intended for that user…” As noted above, Apple is not really the “provider” of the iMessage or FaceTime communications service—it is the licensor of the software that enables the service; nevertheless, a court would likely find that Apple is permitted under this statute to scan messages stored on its own servers for CSAM. But this covers a lot. Since Apple reserves for itself the right to monitor communications not just for CSAM but also for materials that are “inappropriate for minors” (even if these are between adults), Apple, theoretically, could scan an iMessage for a link to a comedy routine that the Supreme Court has ruled inappropriate for minors and, therefore, subject to being banned from the airwaves. Apple also can scan your messages to protect Apple—whatever that means.

Intentional Transportation of Child Pornography

Apple also announced that it will not just scan people’s hard drives and phones for CSAM, but that “each instance [of suspected CSAM] will be manually reviewed by the company before an account is shut down and authorities are alerted.” Not a good idea. From a legal perspective.

What Apple is saying is that they will scan a user’s iPhone with some whiz-bang AI program that will alert them, to a (near) mathematical certainty, that a specific file is something that is absolutely prohibited to possess. They will then use the customer’s own transport medium (the customer’s Wi-Fi or cellular network), without the customer’s knowledge or permission, to “transmit” that CSAM to Apple, which will then take possession of that contraband.

In other words, Apple employees will be knowingly transmitting, storing and possessing child pornography. This is different from scanning files that are already on an Apple server or in an iCloud account; in that case, Apple will have possessed the file without knowing that it is CSAM, then scanned it, determined that it is CSAM and transmitted it (in a way allowed by law) to NCMEC. What Apple proposes to do instead is to identify CSAM on someone else’s computer, and then—knowing that it is CSAM—upload it to themselves.
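
In grossly simplified form, the flow described in the last two paragraphs (match on the device, then use the customer’s own network connection to send the flagged file back to the company for “manual review”) might be sketched as follows. The endpoint URL and function names are hypothetical, invented for illustration; this tracks the article’s characterization of the flow rather than Apple’s published protocol.

import Foundation

// Hypothetical sketch of the detect-then-upload flow described above. The
// reviewEndpoint URL and submitForReview name are invented for illustration.
let reviewEndpoint = URL(string: "https://example.com/manual-review")! // placeholder

func submitForReview(matchedFile url: URL) {
    guard let fileData = try? Data(contentsOf: url) else { return }
    var request = URLRequest(url: reviewEndpoint)
    request.httpMethod = "POST"
    request.httpBody = fileData
    // The upload rides on the user's own Wi-Fi or cellular connection -- the
    // "customer's own transport medium" referred to above.
    URLSession.shared.dataTask(with: request).resume()
}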

The federal child pornography law makes it a crime to “knowingly transport … [CSAM] using any means or facility of interstate or foreign commerce or in or affecting interstate or foreign commerce by any means including by computer.” It also makes it a crime to knowingly distribute or reproduce such CSAM or to possess it “with intent to view” it. While the law creates an “affirmative defense” (that is, you get prosecuted, but you get to offer evidence to defend yourself) that you either promptly deleted the CSAM or that you promptly reported the CSAM to law enforcement, there are a few problems with Apple employees (or the company) asserting this as a defense. First, it’s not clear that Apple intends to notify law enforcement (as opposed to NCMEC, which is authorized to receive this data) when they find suspected CSAM. Second, the “affirmative defense” only applies if there are fewer than three images of CSAM.

It would be hard enough to have a job at Apple that required you to scan through CSAM all day. But that job, it appears, comes with the potential for felony indictment. Cool beans.

Copywrongs

While obscene materials and CSAM likely cannot be copyrighted, Apple proposes to “steal” files from its users’ hard drives and make a copy of them for its own use. To the extent that these files are not either obscene or CSAM, the creator of the image would likely hold the copyright to the work and could assert that Apple’s conduct constitutes an infringement of that work. In addition, the owner of a copyrighted work (like all of the photos on the device) has the right to “restrict access” to that copyrighted work. Certainly, storing your photos on an encrypted iPhone would be a restriction of access to the work. What Apple proposes to do is to obtain access to tens of billions of copyrighted works stored by their creators on hard drives, and not shared with Apple. Even if the copyrighted works are not copied by Apple, the access to them is not authorized and may, in fact, be infringing.

Obviously, Apple’s teams of lawyers thought all of this out before they considered allowing the company to break into the file cabinets of every customer, rummage through every document, picture or file on those customers’ devices and report the findings back to the local gendarmerie. Again, I am all in favor of finding, reporting and eliminating CSAM. I’m not yet convinced that Apple’s chosen way of doing this is legal.


Mark Rasch

Mark Rasch is a lawyer and computer security and privacy expert in Bethesda, Maryland, where he helps develop strategy and messaging for the information security team. Rasch’s career spans more than 35 years of corporate and government cybersecurity, computer privacy, regulatory compliance, computer forensics and incident response. He is trained as a lawyer and was the Chief Security Evangelist for Verizon Enterprise Solutions (VES). He is a recognized author of numerous security- and privacy-related articles. Prior to joining Verizon, he taught courses in cybersecurity, law, policy and technology at various colleges and universities, including the University of Maryland, George Mason University, Georgetown University and the American University School of Law, and was active with the American Bar Association’s Privacy and Cybersecurity Committees and the Computers, Freedom and Privacy Conference. Rasch has worked as cyberlaw editor for SecurityCurrent.com, as Chief Privacy Officer for SAIC, and as a director or managing director at various information security consulting companies, including CSC, FTI Consulting, Solutionary, Predictive Systems and Global Integrity Corp. Earlier in his career, Rasch was with the U.S. Department of Justice, where he led the department’s efforts to investigate and prosecute cyber and high-technology crime, starting the computer crime unit within the Criminal Division’s Fraud Section, efforts which eventually led to the creation of the Computer Crime and Intellectual Property Section of the Criminal Division. He was responsible for various high-profile computer crime prosecutions, including those of Kevin Mitnick, Kevin Poulsen and Robert Tappan Morris. He has also been a frequent commentator in the media on issues related to information security, appearing on BBC, CBC, Fox News, CNN, NBC News, ABC News, the New York Times, the Wall Street Journal and many other outlets.

