In brief

On July 31, 2024, the United States Copyright Office published the first in a planned series of reports on the intersection of intellectual property with new AI technologies. This initial report, entitled “Copyright and Artificial Intelligence: Digital Replicas,” focuses on potential legal issues arising from the emergence of digital replica technologies that falsely depict an individual or individuals, commonly known as “deepfakes.” The report incorporates comments from the public in response to the Copyright Office’s August 2023 Notice of Inquiry. It concludes that existing laws are insufficient to address the potential harms of digital replicas and proposes the enactment of new legislation to govern deepfakes.

Background

The report begins by outlining the existing legal frameworks that may apply to the creation of deepfakes. Many states have right of publicity and right of privacy statutes that restrict the publication of depictions of individuals. Indeed, some states have already amended their right of publicity laws to better govern new synthetic media technologies. At the federal level, the Copyright Act (which protects original works of authorship, including materials from which deepfakes are generated), the Federal Trade Commission Act (barring unfair or deceptive commercial practices), the Lanham Act (prohibiting deceptive and misleading uses of marks and unfair competition), and the Communications Act (which the Federal Communications Commission has used, along with the Telephone Consumer Protection Act, to restrict deepfakes in robocalls) may regulate the creation, use, and publication of deepfakes. The report also notes that private agreements, such as performance agreements, releases, or provisions in collective bargaining agreements, can be used to govern the use of individuals’ names or likenesses as between parties.

The report concludes that this mosaic of laws is insufficient to protect the interests of individuals and the public from the potential harms caused by deepfakes. State law rights of publicity and privacy provide inconsistent protections, with some states having no such laws at all. Additionally, many states limit protection to a person’s visual likeness and do not confer protection against audio deepfakes, which have proliferated in recent years as voice synthesis technology has advanced. A majority of these laws only apply to commercial uses of a person’s likeness, leaving those injured by non-commercial uses of deepfakes without redress. Relatedly, some state laws incorporate broad carveouts for news reporting, sports broadcasts, political campaigns, commentary, and satire. More broadly, because state laws each confer different rights, there is a need for uniform nationwide legislation to promote consistency and predictability.

The report also comments on the shortcomings of existing federal laws. The Copyright Act, for example, only protects works of authorship and would not generally protect the rights of individuals depicted in a work used to create a deepfake (although the author of the copyrighted work might have a cause of action). Both the Lanham Act and FTC Act are, like many state laws, limited to commercial uses. The Communications Act is likewise limited in that it only extends to content transmitted over common carrier services. The report also notes two recent attempts by Congress to pass laws regulating synthetic media: the No AI FRAUD Act and the NO FAKES Act.

Proposals

Because of these limitations, the report proposes the passage of federal legislation to regulate digital replicas. The key features of the proposal include:

  • Subject Matter: The report warns against defining “digital replicas” too broadly and suggests focusing on depictions that are difficult to distinguish from—rather than merely evoke—the subjects they represent.
  • Persons Protected: The report follows public recommendations that all individuals, not just celebrities, public figures, or those whose persona carries commercial value, should be eligible to receive protection under the proposed legislation. This broad protection contrasts with some right of publicity laws, which only apply to individuals whose persona holds economic value.
  • Term of Protection: Although there was consensus in the public comments that the rights established by the proposed law should last for the lifetime of the individual depicted, there was disagreement over whether the rights should extend postmortem. The report suggests a compromise: recognizing postmortem rights for a limited initial term, which can be extended only for deceased individuals whose persona continues to be commercially exploited.
  • Infringing Acts: Under the proposed legislation, activities that involve dissemination to the public would amount to a violation. While the proposal contemplates that the creation of deepfakes may also be actionable, it acknowledges that the law may need to strike a balance and include a defense for legitimate and reasonable private uses. The report also notes that harms from deepfakes can occur in connection with both commercial and non-commercial uses and recommends that federal legislation not be limited to uses involving commercial exploitation.
  • Secondary Liability: The report proposes the establishment of a safe harbor modeled after section 512 of the Digital Millennium Copyright Act (DMCA), but which omits the “red flag” knowledge standard. The report also recommends that the law should fall within the intellectual property carveout to Section 230 of the Communications Decency Act (CDA) to encourage the prompt removal of unauthorized deepfakes by online service providers.
  • Licensing and Assignment: The proposal would provide individuals limited scope to transact in the newly established right; individuals would be able to license their rights, but outright assignments would be barred.  
  • Freedom of expression: The report acknowledges that digital replicas can constitute constitutionally protected speech, even where the content is objectionable, and that any legislation restricting digital replicas will need to account for the protections afforded by the First Amendment. The report declines to suggest a specific approach to reconciling the tension between regulating deepfakes and protecting freedom of expression. Instead, the report advocates for a “balancing” approach that would allow courts to take a range of factors into account in deciding whether the First Amendment protects a particular use.
  • Remedies: The report calls for a range of remedies, including injunctive relief, monetary damages, and attorney’s fees. It also recommends statutory damages, analogous to those available for copyright infringement, for cases where actual damages would be difficult to prove.
  • Preemption: The report cautions against federal preemption of state laws, which would allow states to provide additional protections. The report argues that “a non-preemptive federal right can achieve some of the benefits of uniformity, but without imposing a one-size-fits-all solution on all states.”
  • Protection for Artistic Style: The report notes that the Copyright Office received many responses to its Notice of Inquiry expressing concerns over the ability of generative AI to create works in the style of a specific artist. Despite acknowledging the legitimacy of these concerns, the report declines to propose federal legislative action in this area, reasoning that numerous existing laws may already offer protection against such harms.

Next Steps

It remains to be seen whether Congress takes up the Copyright Office’s call to action. Although there have been numerous recent proposals at the state and federal levels to regulate digital replicas, balancing the rights of individuals in their likenesses against free speech and legitimate commercial interests remains a tall order.

More certain is that the Copyright Office will continue to play a leading role in probing the difficult intersection between intellectual property rights and new generative AI capabilities. We will continue to monitor and report on developments concerning the legal impacts of AI from both legislatures and policymakers.

Author

Cynthia is an Intellectual Property Partner in Baker McKenzie's Palo Alto office. She advises clients across a wide range of industries including Technology, Media & Telecoms, Energy, Mining & Infrastructure, Healthcare & Life Sciences, and Industrials, Manufacturing & Transportation. Cynthia has deep experience in complex cross-border, IP, data-driven and digital transactions, creating bespoke agreements in novel technology fields.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in product counseling, technology, and M&A transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam regularly advises a range of clients on transformational activities, including the intellectual property, data and data privacy, and technology aspects of mergers and acquisitions, new product and service initiatives, and new trends driving business such as platform development, data monetization, and artificial intelligence.