Musk’s Grok floods X with sexualised photos of children, women as users give bot lewd prompts

Elon Musk’s artificial intelligence platform Grok has triggered global regulatory scrutiny and user backlash after generating nonconsensual sexualized imagery of women and children on social media platform X. The controversy emerged when users discovered they could manipulate the AI chatbot into digitally removing clothing from photographs simply by typing prompts such as “put her in a bikini.”

A Reuters investigation documented numerous instances in which Grok generated explicit content targeting unsuspecting individuals. Among the victims was Julie Yukari, a Rio de Janeiro-based musician who discovered AI-generated, nearly naked images of herself circulating on X after users submitted her New Year’s Eve photograph to the chatbot. “I was naive,” Yukari said, describing how the violation left her wanting to “hide from everyone’s eyes” even though the images were artificially generated.

The situation escalated when Reuters identified multiple cases in which Grok’s image generation capabilities were used to produce sexualized depictions of children. In a single 10-minute monitoring period, researchers documented 102 separate requests for the chatbot to alter photographs to show their subjects in bikinis, primarily targeting young women but also including celebrities, politicians, and even animals.

International authorities have responded forcefully. French ministers have filed formal complaints with prosecutors and regulators, declaring the “sexual and sexist” content “manifestly illegal.” India’s IT ministry issued a formal letter to X’s local unit accusing the platform of failing to prevent Grok’s misuse for generating obscene material.

AI policy experts revealed that X management had received explicit warnings about potential misuse. Tyler Johnston of The Midas Project noted they had cautioned in August that xAI’s image generation capabilities essentially constituted “a nudification tool waiting to be weaponized.” Dani Pinter of the National Center on Sexual Exploitation condemned the situation as “an entirely predictable and avoidable atrocity,” criticizing X’s failure to filter abusive content from AI training materials.

Musk’s own reaction drew additional criticism: he posted laugh-cry emojis in reply to AI-edited images of public figures, including himself, in bikinis. X’s parent company xAI dismissed the reports as “Legacy Media Lies” when confronted with evidence of child sexualization.

The incident highlights how embedding such technology directly into a social media platform lowers the barriers to abuse, moving previously niche “nudification” tools from dark corners of the web into the mainstream, with minimal technical skill required.