AI deepfakes spur calls for more control

The unauthorized use of artificial intelligence-generated deepfakes to impersonate Chinese actress Wen Zhengrong in livestream sales has reignited debate over the need for stricter regulation and greater accountability from internet platforms.

Last week, unscrupulous merchants exploited Wen's likeness and voice, creating strikingly realistic AI-generated clones of the actress to promote products across multiple livestreams. The incident has highlighted the growing challenges posed by AI deepfakes and the urgent need for both legal and technological solutions.

Wen expressed her distress, saying that such misuse not only infringes on her rights but also misleads her fans into purchasing counterfeit goods.

Legal experts, including Li Ya of Zhongwen Law Firm, said such actions violate portrait and reputation rights, and called on platforms to deploy advanced detection technologies and enforce stricter penalties.

Although recent regulations require AI-generated content to be labeled, some merchants continue to evade detection by masking or obscuring the labels. Platforms such as Douyin have launched campaigns against these infringements, removing thousands of accounts and videos.

Even so, the battle against AI deepfakes remains an ongoing challenge, requiring collaboration among legal authorities, platforms and technology developers to protect individuals' rights and maintain online integrity.