Framework seeks to keep AI in line

In response to unprecedented advances in artificial intelligence (AI), China has released its upgraded AI Safety Governance Framework 2.0, marking a significant evolution in its approach to AI regulation. Published on September 15, 2025, by the National Technical Committee 260 on Cybersecurity, the framework shifts from a static risk-management model to governance across the full AI life cycle. The update responds to rapid technological breakthroughs, including high-performance reasoning models and the open-sourcing of lightweight AI systems, which have lowered deployment barriers while raising new security concerns.

The framework emphasizes keeping AI under human control, in order to safeguard national security, social stability, and humanity's long-term survival. It introduces new governance principles, such as trustworthy AI applications and the prevention of loss of control over AI systems, and it highlights emerging risks, including AI's potential to disrupt labor markets, exacerbate resource imbalances, and even develop self-awareness. By aligning with international governance practices, such as the labeling and traceability of AI-generated content, China aims to contribute to global AI safety efforts and foster international cooperation.