A sophisticated disinformation campaign leveraging artificial intelligence is targeting Western institutions and individuals, with security experts warning of a fundamental shift in geopolitical influence operations. The case of Professor Alan Read from King’s College London exemplifies this new threat landscape—a legitimate university reel was manipulated with an AI-generated voice overlay to fabricate a politicized tirade against French President Emmanuel Macron and EU leadership.
The synthetic video, featuring a near-identical replica of Professor Read’s voice declaring that Western leaders were “aboard the Titanic which has ‘European Union’ written on its hull,” is just one instance in a widespread Russia-linked synthetic media offensive. Security analysts note that these campaigns have surged in both volume and sophistication since OpenAI released Sora 2, its advanced video-generation software.
According to Chris Kremidas-Courtney, defense and security analyst at the European Policy Centre, “We face systems that can generate persuasion at scale, for pennies. This represents a revolution in political influence, and none of our current governance schemes are prepared to address it.”
The synthetic videos, some garnering hundreds of thousands of views, systematically seek to discredit EU institutions and accuse the Ukrainian government of corruption amid its ongoing defense against Russia’s invasion. Researchers have identified common operational patterns linking these campaigns to Kremlin-aligned disinformation units.
Competition among AI video platforms has exacerbated the problem, with smaller applications eliminating safety measures such as watermarks and offering capabilities that mainstream platforms restrict. Russian AI expert Arman Tuganbaev notes that while OpenAI attempts to prevent creation of videos targeting specific individuals, “second-tier apps will give you that option.”
The impact has been tangible across Europe. In December, AI-generated videos depicting young Polish women advocating for “Polexit” (Poland’s withdrawal from the EU) went viral on TikTok, prompting government officials to confirm Russian involvement based on linguistic evidence. Similarly, Moldova experienced coordinated synthetic media attacks against President Maia Sandu during her 2025 election campaign.
UK officials have expressed concern about potential interference in upcoming local elections, with Electoral Commission chief executive Vijay Rangarajan noting that deepfakes have been “used extensively in elections worldwide.” Current legislation, including Britain’s Online Safety Act, does not explicitly classify disinformation as harmful content, leaving a regulatory gap.
Researchers from Clemson University documented the effectiveness of these campaigns, finding that false narratives promoted by groups like Storm-1516 (linked to veterans of the Kremlin’s “troll factory”) could capture approximately 7.5% of all discussions about Ukrainian President Volodymyr Zelensky on social media platform X within a week of deployment.
Sophie Williams-Dunning, cyber and technology researcher at the Royal United Services Institute, emphasizes that these operations “allow for a level of plausible deniability that complicates counter-influence efforts” compared to traditional state-sponsored media outlets. The evolving threat demonstrates an urgent need for updated regulatory frameworks and detection capabilities to address AI-powered disinformation in geopolitical conflicts.
