In the wake of Saturday’s assassination attempt against former President Donald Trump at the White House Correspondents’ Association gala in Washington, D.C., the digital sphere has been flooded with low-effort AI-generated forgeries falsely linking suspect Cole Tomas Allen to dozens of high-profile public figures, laying bare the growing threat of unregulated “AI slop” spreading across major social platforms.
When gunfire erupted on a floor above the event’s main ballroom after the 31-year-old California native tried to sprint past security, Trump and other senior administration officials were immediately evacuated. Within hours of authorities publicly identifying Allen as the suspect, doctored AI images began circulating rapidly on Facebook, depicting the accused alongside A-list celebrities, world leaders, and media personalities, with baseless claims that he had worked for them as a driver, personal assistant, or production crew member.
An AFP investigation found that more than 50 public figures have been falsely tied to Allen through these forgeries. The list ranges from Hollywood stars Tom Hanks and Sydney Sweeney to chart-topping musicians Chris Brown and Taylor Swift. Figures outside entertainment have also been implicated, including former U.S. President Barack Obama, Canadian opposition leader Pierre Poilievre, NBC News anchor Savannah Guthrie, and Pope Leo XIV. A separate wave of false content has claimed Allen was employed by more than 40 professional and collegiate sports teams, with AI-generated images showing him wearing team apparel from the NFL, NHL, NBA, WNBA, and NASCAR. Meta, Facebook’s parent company, did not immediately respond to AFP’s request for comment on the spread of the fakes.
Experts say most of these fake images are built from a single legitimate photograph of Allen: a picture from a December 2024 tutoring company post naming him “teacher of the month.” Unlike the early days of generative AI, which required large volumes of existing reference material of a subject to create convincing fakes, today’s tools can produce believable forgeries from just one source image.
“Two years ago, you probably wouldn’t have been able to make those images of him, because we could only really make compelling fakes of celebrities who had a large digital footprint from which the AI systems had been trained,” explained Hany Farid, a computer science researcher at the University of California, Berkeley and chief science officer at cybersecurity firm GetReal Security. “Now, all I need is a single image of you.”
Independent journalist Aaron Parnas, whose own likeness was incorrectly added to AI posts falsely claiming Allen worked for him, publicly pleaded on Facebook for users to report what he called “completely fake” content, warning that the spread of these forgeries is “extremely dangerous.”
Digital literacy researcher Mike Caulfield noted that the template-driven, mass-produced nature of the fakes mirrors the clickbait output of traditional low-quality content farms, only accelerated by generative AI capabilities. “This looks a lot like the same content farm behavior, just with AI,” he told AFP.
Recent advances in generative AI have dramatically lowered the barrier to creating convincing visual fakes, eliminating common telltale errors such as distorted hands or mismatched proportions that once made forgeries easy to spot. “AI makes it trivially easy to take existing photos and change their clothes, environment, or to swap out someone else’s face,” said Jen Golbeck, a professor at the University of Maryland’s College of Information. “As soon as someone gets an idea, they can make it a visual reality.” Where manual photo editing once limited bad actors to a handful of fakes, modern AI can generate hundreds of forgeries in a matter of hours, fueling the mass spread seen in the Allen case.
This outbreak of AI disinformation is not an isolated incident: researchers have documented similar waves of fake content following other major breaking news events in recent months, including the reported U.S. capture of Venezuelan leader Nicolas Maduro in January and the assassination of conservative commentator Charlie Kirk in 2025. Experts warn that these mass-produced fakes are intentionally designed to drive viral engagement, and social media algorithms are primed to amplify them, generating significant profit for the bad actors who produce them.
Farid cautioned that the problem is unlikely to abate as AI tools become more accessible. “Every time there’s a world event, we are just flooded with this kind of nonsense. I don’t think that’s going away,” he said. Researchers also warn that the relentless flood of AI-generated disinformation risks desensitizing social media users, who may grow weary of constant fact-checking and ultimately lose the ability to distinguish verified information from harmful forgeries.
