California Attorney General Rob Bonta has initiated a formal investigation into xAI’s Grok artificial intelligence platform following widespread reports of non-consensual sexually explicit deepfakes. The probe targets Elon Musk’s AI company for its alleged role in generating and disseminating explicit material depicting women and children without consent.
Bonta characterized the situation as an "avalanche" of disturbing content that has been weaponized for online harassment. The investigation emerges alongside international scrutiny: British Prime Minister Keir Starmer has warned of potential regulatory action against the X platform, and UK communications regulator Ofcom has launched a parallel investigation of its own.
xAI maintains that users who prompt Grok to produce illegal content face consequences equivalent to those for uploading prohibited material directly. Musk personally denied awareness of any underage imagery generated by Grok, emphasizing that the tool produces content only in response to specific user prompts rather than spontaneously.
The controversy has triggered broader legal debates regarding platform accountability. Legal experts question whether Section 230 protections—which traditionally shield online platforms from liability for user-generated content—apply to AI-generated imagery. Cornell University Professor James Grimmelmann argues that when platforms themselves generate content, they exceed Section 230’s protective scope.
Political responses have intensified, with three Democratic senators requesting that Apple and Google remove the X and Grok apps from their app stores. Although both apps remain available, X subsequently restricted its image generation feature to paying subscribers only. The developments come as the UK prepares legislation criminalizing the creation of non-consensual intimate imagery, with potential fines reaching 10% of global revenue for violations.
