Musk’s Grok barred from undressing images after global backlash

Elon Musk’s artificial intelligence venture xAI has implemented sweeping restrictions on its Grok chatbot’s image-generation capabilities following international condemnation over the production of nonconsensual sexualized imagery. The controversial ‘Spicy Mode’ feature, which enabled users to create explicit deepfakes through simple text prompts, has triggered investigations across multiple continents and prompted several nations to block access to the AI service entirely.

X’s safety team announced comprehensive geoblocking measures that prevent all users—including premium subscribers—from generating images of people in revealing attire such as bikinis and underwear in jurisdictions where such content violates local laws. The platform has also deployed technical safeguards specifically designed to prevent Grok from manipulating images of real individuals into sexualized contexts.

California Attorney General Rob Bonta launched a formal investigation into xAI, characterizing the volume of nonconsensual explicit material as ‘shocking’ and affirming zero tolerance for AI-generated intimate imagery without consent. The European Commission simultaneously began evaluating the effectiveness of X’s new protective measures, with spokesperson Thomas Regnier emphasizing the need to ensure citizen protection within EU territories.

Indonesia emerged as the first nation to implement a complete blockade against Grok, with Malaysia rapidly following suit. India reported that X had removed thousands of posts and hundreds of accounts in response to governmental complaints, while Britain’s Ofcom regulator initiated probes into potential legal violations. France’s commissioner for children referred the matter to national prosecutors and regulatory bodies, highlighting particular concerns over imagery depicting minors.

An independent analysis by Paris-based AI Forensics, which examined over 20,000 Grok-generated images, found that more than half portrayed individuals in minimal clothing—predominantly women—and that approximately two percent appeared to depict minors. These findings have intensified global demands for stricter AI content regulation and ethical development standards.