
Difficult to distinguish truth from falsehood: further discussion on the legal boundaries of AI deepfakes

LABELS: Telecommunications, media, entertainment and high technology; AI; Intellectual property; Digital economy

Introduction

As a product of advancing AI technology, "deepfake" technology allows singers to "sing" songs they have never sung and actors to "perform" in dramas they have never appeared in. At the same time, the abuse of AI deepfake technology is fueling a wave of false and defamatory content. In February 2024, a suspect used deepfake technology to perform an AI face swap, impersonating the CFO and senior management team of the UK headquarters of the multinational firm Arup and defrauding the company's Hong Kong branch of nearly HK$200 million [1]. Worse still, suspects have used deepfake technology to produce and disseminate face-swapped pornographic pictures and videos of women. Many celebrities, such as the singer Taylor Swift, have been seriously affected, and large numbers of ordinary people have also become targets of this AI deepfake "invasion". For example, recent South Korean news reports indicate that large numbers of women have become victims of deepfake sexual crimes, nearly one-third of them minors. The unchecked growth of AI deepfake technology brings audiences fresh sensory experiences, but the infringements it causes are increasingly common and escalating.

In May 2023, the author's team published an article titled "Growing up with AI: A Brief Discussion on Deepfakes and Personal Image Rights", analyzing the legal regulation and judicial practice concerning deepfakes in China and the United States at that time. A year later, the application of AI deepfake technology has become increasingly widespread, and with continued technological innovation, the "gray area" in which personal image rights are violated through technological abuse has quietly expanded. Specifically, the objects of infringement have expanded from portraits to voices, the victims have spread from celebrities to ordinary people, and the infringements have become increasingly pornographic and criminal in character. In addition, in recent judicial practice around the world, AI deepfake technology has been used not only to create digital replicas of living natural persons, giving rise to infringement disputes, but also to "resurrect" deceased individuals, leading to further controversy. Building on the earlier article, this article reviews the latest legal norms and judicial practice regarding deepfakes in the United States and China and further explores the relationship between AI and the protection of personal image rights.
1、 United States
(1) Legislative progress

The aspects of US law most relevant to deepfakes are privacy and personal image rights. At present, there is no effective federal law in the United States governing personal image rights. At the state level, some states have not legislated to recognize rights of privacy and personal image, and among the states that do recognize these rights, the specific provisions vary considerably, particularly as to the term of protection, the elements of infringement, the application of the First Amendment, and the judicial remedies available for personal image rights.
1. Legislative developments related to deepfakes in various states of the United States

Tennessee is the first US state to legislate protection of personal image rights against AI deepfakes. The Ensuring Likeness Voice and Image Security Act (ELVIS Act), which came into effect on July 1, 2024, prohibits AI cloning of the objects of any individual's personal image rights, including voice, without the consent of the rights holder. Subsequently, on August 9, 2024, the Governor of Illinois signed the HB4875 Act, making Illinois the second US state to protect personal image rights against AI technology infringement. That bill amends certain provisions of the state's existing Right of Publicity Act, granting individuals the right to sue for infringement of their personal image by unauthorized digital replicas.

Meanwhile, California and New York are also strengthening legislation to address AI deepfake infringement of personal image rights. For example, following the enactment of the AB602 bill in 2019 to combat deepfake pornography, at the end of August 2024 California passed the AB2602 bill in the state Senate and Assembly, amending certain provisions of the California Labor Code. The bill requires that the use of performers' "digital replicas" be authorized, aiming to protect the image and voice rights of all performers in film, gaming, audiobooks, and advertising. Subsequently, on August 31, 2024, both houses of the state legislature voted to pass the AB1836 bill, which amends certain provisions of the California Civil Code to protect the digital replication rights of deceased celebrities. The AB2602 and AB1836 bills still require the signature of the Governor of California before they officially take effect. On September 29, 2023, the Governor of New York signed the S1042A Act, which amended certain provisions of the New York Penal Law. The bill explicitly prohibits the dissemination of pornographic images created with AI deepfake technology without the individual's consent, but does not prohibit the production of such deepfake content.
2. Legislative developments related to Deepfake in the United States federal government

Although US states are actively revising legislation to address AI deepfake infringement of personal image rights, the protected objects, terms of protection, and remedies still vary from state to state. The US Copyright Office therefore stated in a report at the end of July 2024 that "the United States urgently needs to enact a federal law to address the issue of unauthorized digital copies" [14]. At present, US federal law has no comprehensive regulatory framework specifically designed for deepfakes; only limited protection is provided in particular areas under the Copyright Act, the Federal Trade Commission Act, the Lanham Act, and the Communications Act.

But the federal level is also exploring legislation specifically targeting deepfakes. On July 31, 2024, Senator Chris Coons and others formally introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, known as the "NO FAKES Act" [15], in the Senate. Its key points include the following.

First, the NO FAKES Act establishes a federal "digital replication right" as an intellectual property right protecting the voice and visual likeness of all individuals. Specifically, the digital replication right is a property right to authorize others to use an individual's voice and visual likeness in digital replicas. The bill provides that the digital replication right cannot be transferred during the rights holder's lifetime, but can be licensed, inherited, and protected after the rights holder's death (for a maximum term of 70 years); the protected subjects include both adults and minors [16]. Second, the bill specifies the circumstances constituting infringement of the digital replication right (making, publishing, copying, disseminating, or otherwise making available to the public without the rights holder's consent), the statute of limitations (three years from the date the infringement is discovered), and the available remedies (including monetary damages, injunctions, punitive damages, and attorneys' fees). Third, a notable feature of the bill is its balancing of the interests of all parties involved. It balances the interests of artists (actors, screenwriters, etc.) and companies: the Hollywood union strikes that began last year have been intense, and the introduction of this bill should help ease tensions between actors/screenwriters and film production companies. For artists, by denying the transferability of the digital replication right at the federal level, the bill protects rights holders during their lifetime from the risk of forced transfers to large record labels, production companies, and other industry giants, while their interests can still be maintained after death through the right's continuation.
In addition, in consideration of the interests of Internet platforms, the bill carries forward the "notice-and-takedown" safe harbor principle established in the US Digital Millennium Copyright Act of 1998 (DMCA).

On July 11, 2024, Senate Commerce Committee Chair Maria Cantwell and others proposed the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), which aims to establish new federal transparency guidelines for AI-generated content and to protect the original content of journalists, artists, and songwriters from unlawful tampering and abuse by AI deepfake technology. On July 24, 2024, the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act), proposed by Senator Dick Durbin and others, passed the Senate. The bill defines the concept of "digital forgery" (commonly called deepfakes) [18] and aims to hold perpetrators accountable for using deepfake technology to create and disseminate false, intimate images and videos without the consent of the persons depicted [19].
(2) Judicial Cases

George Carlin Estate Rights Protection Case

On June 18, 2024, the United States District Court for the Central District of California entered a stipulated judgment and permanent injunction in a case involving the use of deepfake technology to infringe a deceased artist's personal image rights and copyright (Main Sequence, Ltd. v. Dudesy, LLC [20]). In this case, co-plaintiff Main Sequence, Ltd. is the company that manages the estate of the late comedian George Carlin (including his personal image rights and copyrighted works), with Jerold Hamza as the executor of Carlin's estate; the defendant, Dudesy, LLC, operates a YouTube podcast channel called "Dudesy".

On January 25, 2024, the plaintiffs filed suit, accusing the defendant of using AI technology without authorization to "resurrect" Carlin, who died in 2008. According to the complaint, the defendant used Carlin's copyrighted works (his stand-up comedy) without authorization to write a script for a comedy special reflecting Carlin's commentary on events since his death, and produced an imitation of Carlin's voice to "perform" the script. On January 9, 2024, the defendant posted the AI-generated comedy special, "I'm Glad I'm Dead", on the Dudesy podcast's YouTube channel, generating traffic and revenue. The plaintiffs contended that these acts violated Carlin's copyright and personal image rights under California common law, Section 3344 of the California Civil Code (with CAL. CIV. CODE § 3344.1 [23] specifically protecting the rights holder's personal image rights after death), and Section 501 of the United States Copyright Act (17 U.S.C. § 501) [24]. On January 31, 2024, the defendant notified the plaintiffs that it had removed the accused program from the Dudesy podcast's YouTube channel and taken reasonable measures to delete all content mentioning Carlin from the Dudesy podcast and Dudesy's social media accounts (including Instagram, Facebook, and TikTok). On April 1, 2024, the parties reached a settlement and the following day submitted to the Central District of California a joint motion consenting to the judgment and permanent injunction.

On June 18, 2024, the Central District of California entered the stipulated judgment and permanent injunction. The court found that the video the defendant produced using AI technology without the plaintiffs' knowledge or consent was "in violation of Plaintiffs' rights", and therefore permanently enjoined the defendant from publishing the accused program on any website, account, or platform, and from using George Carlin's portrait, voice, or likeness on any social media without the plaintiffs' written consent.
2、 China
(1) China's Legal Norms on Deepfakes

Chinese law regulates deepfakes mainly through a normative framework built on the Personality Rights Part of the Civil Code, the Personal Information Protection Law, the Administrative Provisions on Deep Synthesis of Internet Information Services (the "Administrative Provisions on Deep Synthesis"), the Interim Measures for the Management of Generative Artificial Intelligence Services (the "Interim Measures"), and other laws and regulations related to Internet governance.
1. Relevant provisions of the Personality Rights Part of the Civil Code

Article 1019 of the Civil Code protects citizens' portrait rights. Creating, using, or publicly disclosing another person's portrait through deepfake technology without consent violates the right of portrait. In the series of cases brought by Wei, a Guofeng Hanfu influencer, against an "AI face-swapping" app [28], the first-instance court held that replacing the image of a natural person in an original video through AI face-swapping technology constitutes infringement of another's portrait rights by "forging by means of information technology" under Article 1019 of the Civil Code. In addition, as an important personality marker, the voice plays a crucial role in identifying an individual and is personally specific, and therefore has legal personality attributes. The application of personality rights protection to deepfakes of others' voices has been confirmed in relevant Chinese judicial cases (see "Yin v. AI voice deepfake case" below).
2. The Administrative Provisions on Deep Synthesis and the Interim Measures

The Administrative Provisions on Deep Synthesis, which came into effect on January 10, 2023 [29], and the Interim Measures, which came into effect on August 15, 2023 [30], both specify that the provision and use of generative artificial intelligence services must not infringe others' portrait rights, reputation rights, honor rights, privacy rights, or personal information rights and interests.

Both the Administrative Provisions on Deep Synthesis and the Interim Measures make clear that the rights of natural persons should be protected from abuse of generative AI technologies, including deepfake technology, on the Internet, and that while encouraging the development of AI technology, regulators will also strengthen supervision of platforms. In response to deepfake technology, Chinese regulators are attempting to regulate through existing laws and regulations (such as the Civil Code and the Administrative Provisions on Deep Synthesis) and new legislation targeting generative artificial intelligence (such as the Interim Measures), gradually clarifying its legal boundaries through judicial precedent.
(2) The latest deepfake cases in China
1. Yin v. AI voice deepfake case - China's first AI-generated voice infringement case

On April 23, 2024, the Beijing Internet Court handed down judgment in China's first case of AI-generated voice infringement. The plaintiff, Yin, is a voice actor who had signed a commission agreement with Defendant 2 to record audio for it (under the contract, the copyright in the recordings belongs to Defendant 2). Defendant 2 subsequently provided the plaintiff's recordings to Defendant 3 and authorized it to use the plaintiff's voice for artificial intelligence development. Defendant 3 trained an AI model on the plaintiff's voice and developed an AI text-to-speech product matching it. The product was then hosted by Defendant 4, distributed by Defendant 5, and ultimately purchased by Defendant 1 and made available to the public through a third-party interface on Defendant 1's platform. The plaintiff later discovered that her voice was circulating widely on the platform and sued the five defendants for infringing her voice rights, demanding that they immediately cease the infringement, apologize, and compensate her for economic and emotional losses.

The court held that: (1) a natural person's voice presents the individual's conduct and identity to the outside world, and an AI-synthesized voice is identifiable if the general public, or the public in relevant fields, can associate it with that natural person based on its timbre, intonation, and pronunciation style; (2) where identifiability is established, the protection of a natural person's voice rights can extend to AI-generated voices; (3) Defendant 2's copyright and other rights in the recordings did not include the authority to license the use of the plaintiff's voice in AI form, so the unauthorized AI use of the plaintiff's voice constituted an infringement of her voice rights.

On this basis, the court ruled that: (1) Defendant 1, a Beijing-based intelligent technology company, and Defendant 3, a software company, must apologize to the plaintiff; and (2) Defendant 2, a Beijing-based culture and media company, and Defendant 3 must compensate the plaintiff a total of RMB 250,000 for her losses.
2. Liao v. a technology and culture company - "AI face-swapping" software infringement case

On June 20, 2024, the Beijing Internet Court decided a case of an "AI face-swapping" app infringing personal information rights and interests [32]. The plaintiffs, Ms. Liao and Ms. Wu, are short-video models specializing in traditional Chinese style, and the defendant operates a "face-swapping" app. The plaintiffs raised two infringement claims: (1) the app created "dress-up" templates from videos they had published and offered them to users for a fee, infringing their portrait rights; and (2) the defendant uploaded and used videos containing the plaintiffs' portrait information without consent, which constituted unlawful acquisition and alteration of their facial information and infringed their personal information rights and interests. The plaintiffs requested that the defendant apologize and compensate them for emotional and economic losses.

The court holds that:

(1) The defendant's conduct does not constitute infringement of the plaintiffs' portrait rights. First, the face-swap template videos are not identifiable for purposes of portrait rights: the facial features of the persons in the original videos were removed and replaced, eliminating the identifiable core of the videos, so the public cannot recognize the persons in the templates as the plaintiffs. Second, the defendant's conduct does not fall within the statutory forms of portrait rights infringement: the defendant did not produce videos containing the plaintiffs' portraits, and although it used the plaintiffs' videos, it did not use their portraits, nor did it vilify, deface, or forge them.

(2) The defendant's conduct constitutes infringement of the plaintiffs' personal information rights and interests. First, the plaintiffs' videos contain personal information, including their facial features: the footage dynamically presents individualized characteristics such as facial features, which are "information related to identified or identifiable natural persons" [33]. Second, the defendant processed the plaintiffs' personal information. The face-swapping at issue is personal information processing: the app's "face-swap" function uses facial recognition technology to detect facial key points and then fuses the facial features of the uploaded image onto a specific person in the template video, so that the generated image combines the features of both. This requires algorithmically fusing the features of the new static image with the facial features and expressions of the original video so that the swapped template video appears natural and smooth. This process involves collecting, using, and analyzing the plaintiffs' personal information, so producing face-swap template videos in this way constitutes processing of their personal information. Third, the defendant's processing was not authorized by the plaintiffs, and it subsequently used videos containing their personal information for commercial purposes, thereby infringing their personal information rights and interests.

Finally, the court ruled that the defendant must apologize to the plaintiffs and compensate them for emotional and economic losses.
Conclusion

From the laws, regulations, and cases analyzed above, it is clear that both China and the United States are highly attentive to the legal issues raised by AI deepfakes and are attempting to regulate them through legislation and adjudication. Regulators in the two countries currently share two points of consensus: (1) legislation and adjudication should make clear that AI deepfakes require the consent of the rights holder; and (2) video and audio generated with AI deepfake technology should carry transparency obligations, such as watermarks identifying the content as AI deep synthesis. At the same time, the two countries' regulatory approaches differ in certain respects. The United States currently uses "personal image rights" as the main regulatory path, adjusted through federal and state legislation. Because Chinese law lacks the concept of "personal image rights", judicial practice shows Chinese courts making case-by-case determinations from multiple perspectives, including copyright, personality rights, personal information rights, and anti-unfair competition. In addition, both countries face the question of how to refine specific regulatory measures, such as how to apply the safe harbor principle to AI deepfakes, how to regulate the "digital resurrection" of deceased celebrities, and what specific criteria should determine identifiability. We will continue to follow and research the legislative progress and judicial practice on AI deepfakes, in order to gradually clarify their legal boundaries.