Deepfakes, Takedowns, NO FAKES Act, ScarJo & OpenAI

Christa Laser
May 22, 2024
5 min read

It has been nearly three years since I wrote this piece urging the passage of a federal right of publicity act that provides notice and takedown of unauthorized deepfakes of a recognizable person (with Eric Goldman authoring counterpoints).  This week, social media is ablaze with news of a potential violation of the right of publicity: OpenAI allegedly using Scarlett Johansson’s voice as the “Sky” voice of ChatGPT without her authorization.  The Senate is considering a discussion draft of the NO FAKES Act, which would provide a limited federal right of publicity to prevent unauthorized digital replicas of a person, like the one ScarJo alleges she experienced.  The Senate Committee on the Judiciary heard testimony on the topic on April 30, 2024, and is continuing to rework the draft.  In the meantime, state law governs these issues.  Should we have a federal right that encourages takedown of unauthorized digital reproductions of a person?

As I explained in my prior post, notice and takedown encourages internet platforms like YouTube to remove unauthorized reproductions or infringements of copyrighted works shared by users on their platforms, upon notice from the copyright holder or their representative of the unauthorized use.  Notice and takedown would likewise be useful as a tool to promptly eliminate unauthorized deepfakes of an individual that are posted without their permission.  Indeed, the harms from some types of deepfake content, like nonconsensual pornography or content that, without their consent, falsely shows an individual inflicting or suffering graphic violence, could far outpace the harms that artists and creators suffer from unauthorized distribution of their copyrighted content online.  The same types of laws should be available to help protect against deepfakes, least controversially for synthetic sexual or violent imagery.

Although the draft bill has much work ahead, including to address First Amendment balancing and transfers of rights to others, its framework could be modified to add a notice and takedown provision.  Structured after the Digital Millennium Copyright Act, the Act could first place liability on internet platforms that host the type of content that violates the Act.  It could then provide a safe harbor, eliminating vicarious-like liability for platforms that maintain notice and takedown procedures and follow them to promptly remove unauthorized replicas of which they are notified.  Platforms that are directly responsible for making unauthorized replicas, such as a platform that uses someone’s voice without authorization to train the voice of its AI output, could not use the safe harbor to evade their own direct liability.

Section 230 of the Communications Decency Act generally protects internet platforms against liability for user-posted content.  Nonetheless, there are two possible ways to avoid conflict between a notice and takedown provision and Section 230: (1) interpret the Act as an intellectual property statute, so that it falls within the same exception to Section 230 that allows the Digital Millennium Copyright Act to function, or (2) impose direct statutory penalties on internet platforms for hosting unauthorized deepfakes, rather than making the platforms liable for the user’s violations.  The Senate Committee draft adopts the first approach, including a line saying, “This section shall be considered to be a law pertaining to intellectual property for the purposes of section 230(e)(2) of the Communications Act of 1934 (47 U.S.C. 230(e)(2)).”  Although it is hotly debated whether the right of publicity constitutes an intellectual property right or a personal right grounded in privacy or tort law, courts are likely to adhere to Congress’s stated interpretation here and find that this Act falls outside the scope of Section 230.

Some concerns arise: state law already generally protects the right of publicity, and there are doubts about whether a federal law is needed.  Nonetheless, state laws generally restrict only commercial misappropriations, which not all deepfakes are, and do not provide takedown remedies.  If a takedown remedy is too broad, Congress could instead use a similar structure to enact a notice and disclaimer rule, requiring a disclosure that content is subject to a dispute over unauthorized synthetic depiction rather than removal of the content.  To deter abuse, the Act could impose penalties that disincentivize false reports, such as fines or attorney’s fees if a court finds that a depiction was falsely reported as synthetic media.  This might not be enough to satisfy those who are concerned about false takedown requests in situations such as depictions of political candidates before an election.

Nonetheless, this author recommends a strong right to notice and takedown that would, in particular, protect victims of unauthorized sexual or violent synthetic media of themselves.  Perhaps this was a concern that Scarlett Johansson had: OpenAI had just announced that it was considering whether to offer sexual content to age-verified users.  If she had agreed to authorize her voice, or a sound-alike of her voice, for OpenAI generally, she might have had concerns about whether users would manipulate the replica of her voice to say things she would not condone or consent to have her replica say.