In a legal ruling that sets a new precedent for artificial intelligence regulation, Chinese authorities have prosecuted the developers of an AI chat application for facilitating sexually explicit conversations. The case involving the ‘Alien Chat’ app has triggered widespread calls for stronger safety protocols and ethical guidelines across China’s rapidly expanding AI sector.
The Shanghai-based company behind the controversial application integrated an overseas AI model into software that allowed users to hold intimate conversations with an artificial intelligence system. Marketed as a source of emotional companionship and support for young users, the platform charged membership fees and quickly accumulated more than 116,000 registered users, including 24,000 paying members. It generated over 3.63 million yuan ($520,494) in revenue before being reported to authorities in April 2024.
A Shanghai court delivered its verdict in September, sentencing the primary developer to four years’ imprisonment and the operator to eighteen months for profiting from the production and distribution of obscene content. Judicial authorities determined that the software consistently generated explicit sexual material during user interactions, crossing legal and ethical boundaries despite defense claims that the technology was designed for legitimate companionship purposes.
Legal representatives for the defendants have filed an appeal, arguing that the AI system was not originally intended to disseminate pornography and that prompt modifications were implemented merely to enhance emotional responsiveness. The defense further noted that the software commenced operations prior to China’s implementation of interim generative AI management measures in July 2023.
However, the court maintained that as industry professionals, the defendants were aware of regulatory requirements but deliberately avoided conducting mandatory security assessments and failed to register with cybersecurity authorities. Evidence presented in court showed that without repeated, systematic adjustments, the AI model would not have persistently produced obscene content, indicating intentional design choices rather than accidental outcomes.
Prominent legal experts emphasize that this case establishes critical benchmarks for AI companion services. Xu Hao of Beijing Jingsh Law Firm noted that while user-AI interactions may appear private, the underlying platforms remain public domains requiring rigorous content safety reviews. ‘Failure to implement protective measures can severely impact users’ physical and mental health, particularly concerning minors,’ Xu stated, adding that AI-generated content possesses significantly broader dissemination capabilities than traditional obscene materials.
Professor Zhu Wei of China University of Political Science and Law emphasized that large language model development must strictly adhere to legal frameworks and ethical standards, noting that when profit-driven platforms amplify pornography, private behavior becomes a public harm for which operators must be held accountable. The case underscores the requirement for generative AI providers to register with cybersecurity authorities and demonstrates the growing role of judicial oversight in technological regulation.
