In a significant policy shift, social media platform X has announced stringent measures against creators who disseminate artificially generated conflict footage without proper disclosure. The Elon Musk-owned company revealed Tuesday that participants in its Creator Revenue Sharing program will face 90-day suspensions for posting AI-generated videos of armed conflicts without clear labeling.
The decision comes amid escalating geopolitical tensions involving the United States, Israel, and Iran, where synthetic media poses unprecedented challenges to information integrity. Nikita Bier, X’s Head of Product, emphasized the critical need for authentic battlefield information during wartime, noting that current AI technologies have made it “trivial to create content that can mislead people.”
The move marks a notable reversal of X’s previous stance on content moderation. Since Musk’s $44 billion acquisition of the platform (formerly Twitter) in October 2022, the company has systematically dismantled most misinformation policies, characterizing them as forms of censorship. The new framework introduces escalating penalties, with repeat offenders facing permanent removal from the revenue program that distributes advertising earnings to eligible creators.
Enforcement will leverage multiple detection methods, including Community Notes (the platform’s crowd-sourced fact-checking system), metadata analysis, and technical signals embedded within AI-generated content. The company confirmed ongoing refinements to both policies and technical infrastructure to enhance trust during critical global events.
