‘Clearly me’: AI drama accused of stealing faces

The rapid expansion of artificial intelligence has opened a new chapter of ethical and legal uncertainty, highlighted by a recent high-profile case in China’s booming microdrama industry: two creators have accused a viral AI-generated series of using their likenesses without consent to portray villainous characters.

Christine Li, a 26-year-old model and social media influencer based in Hangzhou, never auditioned for, nor agreed to appear in, the AI microdrama *The Peach Blossom Hairpin*. The show, which premiered last month on Hongguo — a leading short-form microdrama platform owned by ByteDance, the parent company of TikTok — gained significant traction before the controversy broke. Li only learned of her unauthorised appearance when fans reached out to alert her that the show’s lead antagonist was an obvious digital replica of her, created from public photos she had posted to her social media channels two years prior.

What made the experience even more distressing for Li was the nature of the character: her digital deepfake was scripted to commit violence against other women and to abuse animals. “I was genuinely shocked. It was clearly me,” Li told Agence France-Presse in an interview. “I also felt a deep fear. I kept wondering what kind of person would do something like this.”

Li is not alone in her experience. A male stylist specialising in traditional Chinese clothing and cosmetics, who requested the pseudonym Baicai to protect his privacy, also discovered his likeness had been stolen to play the role of Li’s character’s husband, another unsavoury, “sleazy” antagonist. Like Li, Baicai had shared public costume photos on Xiaohongshu, China’s Instagram-style social platform, which were used to generate his digital twin. Both individuals confirmed to AFP that their original photos bear an unmistakable resemblance to the characters featured in the series.

Baicai shared the same concerns as Li, worrying that the negative portrayal could damage his personal reputation and harm future career opportunities. “There are probably plenty of cases with unknown victims,” he noted, pointing to the widespread lack of oversight for unauthorised AI deepfake use in the fast-growing sector.

Microdramas, ultra-short online soap operas with episodes running just two to three minutes, have exploded in popularity across China and global markets in recent years. As of October 2024, Hongguo alone counts roughly 245 million monthly active users, hosting thousands of free AI-generated and live-action bite-sized shows. The industry has turned to AI as a low-cost tool to speed up production and cut expenses in the highly competitive, multi-billion-dollar market.

However, the case has exposed critical gaps in content moderation and regulatory oversight. Even after the story gained public attention and sparked widespread outcry over AI ethics, AFP confirmed that *The Peach Blossom Hairpin* remained online for days; the unauthorised deepfake characters were only quietly swapped out as public pressure grew, before the series was eventually removed.

In early April, Hongguo released an initial statement confirming it had removed the series after finding producers violated platform rules and contractual agreements. In a follow-up statement released earlier this month, the platform said it would implement broader reforms to strengthen content review and creator authorisation protocols. It also noted that it had already removed 670 AI-generated microdramas that violated platform regulations, and would issue harsher penalties for repeat offenders. When contacted by AFP for comment, ByteDance directed reporters to the two existing Hongguo statements.

Two Chinese companies are linked to the production of *The Peach Blossom Hairpin*: one is associated with a verified account on Douyin, the Chinese version of TikTok, that published the series, while the other is listed as the official producer on a Chinese government registration portal. AFP contacted both firms for comment but received no response. Li and her legal team are still working with Hongguo to confirm the exact identity of the responsible creator, a necessary step before moving forward with the planned lawsuit against both the producers and the platform.

Current Chinese regulation places primary responsibility for screening potentially problematic content on hosting platforms, according to rules set by the National Radio and Television Administration. Platforms that fail to complete mandatory content reviews face forced removal of non-compliant content. If platforms are aware of intellectual property or rights infringement and fail to take action, affected individuals can report the issue to Chinese cyberspace regulators, who can levy administrative penalties, explained Zhao Zhanling, a partner at Beijing’s Javy Law Firm.

Yijie Zhao, Li’s lawyer from Henan Huailv Law Firm, noted that using AI to feature an individual in a demeaning, negative role without explicit permission may violate both portrait rights and reputation rights under Chinese law. New national regulations that took effect this month require all AI-generated microdrama content to be officially registered and licensed, but legal experts note that bad actors can still avoid accountability by registering temporary shell companies or hosting content on overseas servers to hide their activity.

In 2024, a Beijing court ordered a company to pay compensation and issue a public apology to a celebrity whose likeness had been used without permission to create an AI deepfake for inappropriate purposes. But lawyers note that plaintiffs who are not public figures with high commercial value often receive relatively low compensation for such violations. For Li, the stakes extend far beyond financial compensation: she worries that her connection to the controversy will damage her reputation and cost her future modelling opportunities, leaving her permanently associated with the scandal.

Baicai has not yet launched formal legal action, but he joins Li in calling for stronger regulatory and platform safeguards to prevent similar unauthorised deepfake misuse from happening to other people. The case has reignited global conversations around the risks of AI deepfake technology, which has already raised widespread concerns over job displacement for actors, as well as its misuse for scams, disinformation and non-consensual intimate content.