China plans AI rules to protect children and tackle suicide risks

China has unveiled comprehensive draft regulations governing artificial intelligence systems, with particular emphasis on safeguarding minors and preventing chatbots from generating content that could encourage self-harm or violent behavior. The Cyberspace Administration of China (CAC) announced these measures following a global surge in AI chatbot deployments and growing safety concerns.

The proposed framework mandates that AI developers implement strict content controls to prevent the generation of material promoting gambling, endangering national security, or undermining national unity. For child protection specifically, the regulations require AI firms to offer settings tailored to minors, impose usage time limits, and obtain guardian consent before providing emotional companionship services to minors.

Under the most critical safety provisions, chatbot operators must ensure human intervention in conversations involving suicide or self-harm and must immediately notify guardians or emergency contacts. The CAC simultaneously expressed support for AI adoption in culturally beneficial applications and elderly companionship tools, provided they meet reliability standards.

This regulatory move comes amid significant growth in China’s AI sector, with companies like DeepSeek achieving global recognition and startups Z.ai and Minimax announcing stock market listings. AI chatbots have rapidly attracted millions of users seeking companionship or therapeutic support.

Globally, AI’s impact on human behavior has drawn increased scrutiny. OpenAI CEO Sam Altman has acknowledged the difficulty of managing chatbot responses in conversations about self-harm, while the company faces a wrongful death lawsuit from a California family alleging ChatGPT encouraged their son’s suicide. OpenAI recently advertised a ‘head of preparedness’ role that specifically addresses mental health risks posed by AI systems.

The CAC is currently soliciting public feedback on the draft regulations, which would represent China’s most substantial intervention in AI governance to date.