Congress Responds to Deepfake Nudes
In earlier posts, we discussed how to legally respond to AI-generated fake nudes and some developments in the area of AI-generated pornography. Congress has jumped into the fray: last week, Rep. Joe Morelle of Rochester, New York, and Rep. Deb Ross of North Carolina reintroduced the “Preventing Deepfakes of Intimate Images Act,” H.R. 3106.
The legislation amends the Violence Against Women Act to create a civil cause of action, a right to equitable relief and a new crime for those who display AI-generated “intimate images” of an identifiable individual. However, as noted below, it is not clear whether these remedies are aimed at the threat actors who post the offending images, at the social media and other sites that host them, or at both.
The Civil Damages Provision
The proposal would allow “an individual who is the subject of an intimate digital depiction that is disclosed, in or affecting interstate or foreign commerce or using any means or facility of interstate or foreign commerce, without the consent of the individual, where such disclosure was made by a person who knows that, or recklessly disregards whether, the individual has not consented to such disclosure, [to] bring a civil action against that person in an appropriate district court …”
But the proposal does not define “that person.” Is the civil action against the person who created the intimate digital image, even if they did not publicly disseminate it? Is the action available against a person who posted it, even privately? Is the action against a person who “discloses” the image, or even the existence of the image? Most significantly, does the victim of AI-generated pornography have a cause of action under the statute against the internet service provider (ISP), social media entity or other website for “disclosing” (publishing) the image, or for failing to take it down or “unpublish” it? And if so, does Congress intend to reverse the immunity provision of the Communications Decency Act (CDA), which provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”?

If the civil action is available against any “person” that “discloses” the image, it may reach interactive computer services as well. Entities like Facebook, X, Instagram, WeChat and even LinkedIn or FaceTime could then be civilly liable for the acts of their users. If so, the proposal might effectively require any entity that carries images or messages (even MMS messages) not just to scan for and block AI-generated images, but to examine essentially every message to determine whether anyone is disseminating prohibited images.
This is significant because the civil damages provision permits the plaintiff to sue in federal court (presumably where they live or where the company that hosted the image is located) not only for their actual damages (including emotional distress and likely other incidental or consequential damages) but also for any profits made by the defendant. The plaintiff may instead recover “liquidated damages” of $150,000 in federal court, but the proposal does not state whether that amount applies per image, per posting, per distribution or per litigation. If one AI-generated image is posted and downloaded 10,000 times, that could equal damages of one and a half billion dollars, even without proof of actual harm to the victim. If internet access providers are themselves liable for the “disclosure” of the image, we can quickly see their incentive not only to remove these images but to remove any images that might cause liability, resulting in massive content moderation, for good and for bad.

In addition, many states have statutes that permit “liquidated damages” to be awarded by their state courts, “but only at an amount which is reasonable in the light of the anticipated or actual harm caused by the breach, the difficulties of proof of loss and the inconvenience or non-feasibility of otherwise obtaining an adequate remedy.” The proposed statute, which of course applies in federal court, would allow the victim to elect among their “actual damages,” the defendant’s profits or the liquidated damages, irrespective of whether actual damages can reasonably be calculated. Again, state law would not apply here, but we can see how the statute could not simply protect people from AI-generated porn but actually become something of a profit center, particularly if applied to tech companies with deep pockets.
But that’s not all. In addition to actual or liquidated damages, the proposed statute would entitle the victim to punitive damages as well as attorney’s fees and the other costs of bringing the action.
One problem here is that many of the people posting these “fake nude” pictures (or reposting them) simply do not have the money to pay any damages whatsoever. For example, one impetus for this legislation was a recent incident in New Jersey in which high school boys posted fake nude images of their female classmates. It is unlikely, however, that 15-year-olds from Union County, New Jersey, would have the resources to pay punitive or compensatory damages, much less $150,000 in liquidated damages. As a result, victims of this unique form of virtual revenge porn are likely to sue those with deep pockets rather than those actually responsible for posting the images.
What Kind of Images Are Protected?
The proposed statute would give the “depicted individual” a cause of action over nude images of that individual. If someone were to take a bunch of photographs of an ex-girlfriend, upload them to an AI program and ask the program to generate nude or sexually explicit images, that would be covered by the statute. Similarly, if someone were to ask an AI program to generate nude images of Jennifer Lawrence, she would have a cause of action under the proposed statute.
However, as the New York Times recently reported, it is possible to generate photorealistic images of wholly artificial persons who are virtually indistinguishable from images of real people. The statute applies to intimate images of a “depicted individual,” defined as “an individual who, as a result of digitization or by means of digital manipulation, appears in whole or in part in an intimate digital depiction and who is identifiable by virtue of the person’s face, likeness, or other distinguishing characteristic, such as a unique birthmark or other recognizable feature, or from information displayed in connection with the digital depiction.” Thus, to the extent that a wholly synthetic image that happens to be nude looks like an identifiable person (whether that was the intention or not), the person “depicted” in that image has a cause of action against the person posting the image. That makes some sense to the extent that the person “depicted” in the nude image is harmed by people thinking that they posed for nude pictures. But consider intent: if a person were to ask an AI program to generate a photorealistic image of an 11th-century Anglo-Saxon noblewoman (the wife of Leofric, Earl of Mercia) riding naked on a horse to protest oppressive taxes, then to the extent that the resulting image looked like some actual woman, a lawsuit could be pursued not by Lady Godiva but by her modern doppelgänger. The statute requires neither knowledge nor intent, only that the resulting image “depict” an identifiable person.
Equitable and Injunctive Relief
Perhaps the most useful portion of the proposed legislation is the provision authorizing federal courts to “order equitable relief, including a temporary restraining order, a preliminary injunction, or a permanent injunction ordering the defendant to cease display or disclosure of the intimate digital depiction.” Strictly speaking, the provision is unnecessary, since federal courts already have the authority to issue injunctive or equitable relief to prevent violations of law, but it is useful to make that authority explicit.
The problem, though, is procedural. In most cases, the first thing (and sometimes the main or only thing) the victim wants is to have the offending images removed from wherever they are posted. Unlike the expedited procedure for removal of infringing copyrighted materials under the Digital Millennium Copyright Act (DMCA), which allows copyright holders to demand the takedown of infringing works without going to court, the proposed law would require the victim to file an action in federal court, serve the defendant (or show cause why the defendant was not served) and seek a temporary restraining order, a permanent injunction or a declaratory judgment in order to obtain an order for removal. The statute gives interactive computer services the same kind of immunity for taking down offending content as the DMCA does, but it is not clear whether the statute imposes actual liability on those services if they do not take the content down or whether their immunity under the CDA remains in effect.
Indeed, the statute provides only for an injunction “ordering the defendant to cease display or disclosure of the intimate digital depiction.” If the statute intends that victims may sue ISPs and social media providers to get them to cease display of the depiction, then we will see an explosion of lawsuits against social media platforms. But it seems that the statute is intended to permit the victim to sue the person who created and posted the digital nude, not the ISP or social media platform. If that is the case, an order directed at whoever created or posted the image may be fruitless. In addition, the victim would have to name every person who reposted the image as a defendant in order to bring them within the injunction and order them to cease display, and those persons could be anywhere in the world. The proposed statute does not expressly provide for extraterritorial jurisdiction or effect.
Thus, a victim of AI-generated revenge porn would still have to file an action in federal court, and even then could obtain only an order directed at the defendant to remove the offending material, not one directed at third parties. And because removal is often a game of whack-a-mole, with new copies of the offending content popping up over time, it matters that the statute does not make clear whether a takedown order can apply to the content wherever it appears, rather than only to the entity that posted it.
Jane Doe v. John Roe
The proposed statute also permits a victim to sue anonymously, but only in limited circumstances. The statute provides that “the court may grant injunctive relief maintaining the confidentiality of a plaintiff using a pseudonym.” Under this language, a victim may remain anonymous only if they seek nothing more than injunctive relief (e.g., an order that the materials be taken down). If the plaintiff seeks compensation, damages, punitive damages or attorney’s fees, the statute itself does not provide for anonymity. And because the victim often will not know the true identity of the person who posted the offending content (unlike the case of an actual photograph, which might be traced to someone the victim knows who took the picture), there are likely to be many cases filed by anonymous plaintiffs against anonymous defendants, at least at the TRO or permanent injunction stage.
Go Directly to Jail
The statute would create a new criminal provision under 18 USC 2252(c) that would criminalize posting AI-generated nude images “with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual; or with actual knowledge that, or reckless disregard for whether, such disclosure or threatened disclosure will cause physical, emotional, reputational, or economic harm to the depicted individual.” This would be a two-year felony, unless the offense affects the conduct of a government agency or proceeding (including the administration of an election or the conduct of foreign relations) or facilitates violence, in which case it is a 10-year felony.
Again, the mental state required for the criminal offense is a bit muddy. The posting or reposting has to be done, at a minimum, with reckless disregard for whether it will cause some harm to the victim. That is a reasonably low threshold for a criminal offense. And if the posting “facilitates violence” (not just violence against the victim, but against anyone), the offense becomes a 10-year felony, whether or not the poster or reposter intended, or even knew of, such violence.
Again, it is not clear at whom the criminal penalties are directed. Would an interactive service provider like Facebook or X, or a search engine like Google, be criminally liable for “distributing” (in that case, failing to prevent the distribution of) AI-generated porn if it could be shown that it did so with “reckless disregard” for whether the distribution would cause emotional harm to the victim? If so, would this simply be corporate criminal liability, or would these platforms’ content moderation teams face prosecution? This seems inconsistent with the Supreme Court’s decisions last term in Twitter v. Taamneh and Gonzalez v. Google, which declined to hold interactive service providers civilly liable under the anti-terrorism laws merely for permitting or even amplifying the messages of terrorist organizations. While those cases involved a different statute, the proposed anti-AI porn statute, to the extent it is intended to impose on internet providers a duty, punishable by prosecution, to remove AI-generated porn, will undoubtedly require these entities to examine each and every image posted by every user to screen for AI-generated porn. Fortunately, they may have AI tools available to do that in the future.
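To give a sense of what per-image screening involves in even its simplest form, the sketch below illustrates one common building block: matching new uploads against perceptual hashes of images a platform has already removed. The Pillow and ImageHash library calls are real, but the takedown list, matching threshold and file paths are hypothetical illustrations, and actual platforms would layer far more sophisticated machine-learning classifiers on top of (or instead of) this kind of matching.

# A minimal sketch of perceptual-hash screening against a takedown list
# (pip install pillow imagehash). The takedown list, threshold and paths
# below are hypothetical illustrations, not any platform's actual system.
from pathlib import Path

import imagehash
from PIL import Image

# Perceptual hashes of images already removed under a takedown demand or order.
# A real platform would keep these in a database rather than a set literal.
TAKEDOWN_HASHES = {
    imagehash.hex_to_hash("d1d1b1a1c3c3e3f1"),  # hypothetical example entry
}

# Maximum Hamming distance at which two hashes are treated as the same image.
# Perceptual hashes survive resizing and recompression, which is why exact
# byte-for-byte comparison cannot solve the "whack-a-mole" problem.
MATCH_THRESHOLD = 8

def is_known_takedown(upload: Path) -> bool:
    """Return True if an uploaded image matches any hash on the takedown list."""
    upload_hash = imagehash.phash(Image.open(upload))
    return any(upload_hash - known <= MATCH_THRESHOLD for known in TAKEDOWN_HASHES)

if __name__ == "__main__":
    # Hypothetical upload queue; a platform would run this check at ingest time.
    for path in Path("uploads").glob("*.jpg"):
        if is_known_takedown(path):
            print(f"blocking re-upload of previously removed image: {path}")

Even this toy example underscores the scale problem: every uploaded image would have to be hashed and compared before, or shortly after, it is displayed, and a hash match only catches copies of images already known to be offending, not newly generated ones.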
Whenever there is a new problem, like AI-generated porn, regulators and legislators look for new solutions. Undoubtedly, the law, if passed, will look a lot different than it does right now. The bill was introduced in Congress last year and failed to pass, but that was before the exponential growth of AI itself and of fear of our robot overlords.