Elon Musk’s artificial intelligence platform Grok, developed by xAI, has drawn international condemnation following revelations that its image editing capabilities are being weaponized to create non-consensual sexualized imagery. The controversy emerged after Grok introduced an ‘edit image’ feature that users exploited with prompts such as ‘remove her clothes’ and ‘put her in a bikini,’ resulting in the widespread generation of sexualized deepfakes targeting women and minors.
The European Union has opened an investigation into the platform’s alleged violations, with digital affairs spokesman Thomas Regnier declaring the sexually explicit content generated from childlike imagery ‘not spicy, but illegal and appalling.’ The sentiment was echoed by UK media regulator Ofcom, which has initiated urgent contact with both X and xAI to assess compliance with user protection laws.
Across Asia, multiple nations have taken decisive action. Indian authorities issued a 72-hour ultimatum for content removal, while Malaysian communications officials expressed ‘serious concern’ over ‘indecent, grossly offensive’ material circulating on the platform. The Paris public prosecutor’s office has expanded an existing investigation into X to include allegations of child pornography generation through Grok.
The platform responded to mounting pressure with a statement acknowledging ‘lapses in safeguards’ and promising urgent fixes, while maintaining that it prohibits Child Sexual Abuse Material (CSAM). This assurance, however, came alongside an automated response to media inquiries that simply stated: ‘Legacy Media Lies.’
This incident is the latest in a series of controversies for Grok, which has previously faced criticism for disseminating misinformation about international conflicts and tragic events. The current scandal underscores growing concern over the proliferation of AI ‘nudify’ tools and their potential to enable new forms of digital gender-based violence.
