China is experiencing a technological transformation as OpenClaw, an open-source AI agent, sweeps across the nation with capabilities extending far beyond conventional chatbots. Originally known as Moltbot and Clawdbot, the system can autonomously manage emails, coordinate schedules, and execute financial transactions on behalf of users. This surge in adoption, significantly accelerated by promotional campaigns from tech giants Tencent and Alibaba, reflects a global shift toward action-oriented AI systems first observed in the United States earlier this year.
The phenomenon, colloquially termed ‘raising lobsters’ in reference to the project’s crustacean mascot, has triggered intense debate within both industry and government circles regarding governance frameworks, security safeguards, and the inherent risks of delegating sensitive tasks to software operating with limited transparency. China’s Ministry of State Security issued unprecedented guidelines Tuesday, warning that while OpenClaw delivers efficiency gains, it simultaneously creates novel vulnerabilities through its broad permissions and cross-platform interactions.
Security experts emphasize that these AI agents lack professional maintenance protocols and patching mechanisms, making them susceptible to malicious plugins that can bypass controls and exfiltrate sensitive data with stealth exceeding traditional trojans. The National Computer Network Emergency Response Technical Team had previously alerted on March 10 about OpenClaw’s vulnerability to ‘prompt injection’ attacks, where hidden instructions trick the AI into harmful actions.
Unlike static large language models such as ChatGPT, OpenClaw represents a new class of agentic AI that connects messaging platforms, language models, email accounts, storage devices, and e-wallets to execute end-to-end tasks with minimal human intervention. Its open-source nature and local deployment capability provide greater flexibility than proprietary alternatives like Beijing-based Manus, but also introduce greater complexity and security responsibilities.
The rapid adoption has exposed critical security gaps, with many users deploying the technology without basic safeguards. Security professionals recommend treating AI agents as digital employees with strict governance, implementing least privilege access, encryption, audit logs, and sandboxed virtual environments. As US tech giants advance similar capabilities through partnerships like Apple-Google’s integration of Gemini models, China faces urgent regulatory challenges in establishing AI governance comparable to the EU’s comprehensive AI Act.
