A groundbreaking investigation based on testimony from over a dozen whistleblowers reveals how major social media platforms deliberately amplified harmful content to compete in the algorithmic engagement race sparked by TikTok’s unprecedented growth.
Internal documents and insider accounts obtained by the BBC demonstrate that Meta (parent company of Facebook and Instagram) and TikTok made conscious decisions to prioritize engagement metrics over user safety. According to a Meta engineer who spoke anonymously, senior management explicitly instructed teams to allow more ‘borderline’ harmful content—including misogyny and conspiracy theories—in user feeds to better compete with TikTok’s viral appeal. ‘They sort of told us that it’s because the stock price is down,’ the engineer revealed.
The algorithmic competition intensified when Meta launched Instagram Reels in 2020 as a direct response to TikTok’s pandemic-era dominance. Matt Motyl, a former senior Meta researcher, confirmed that Reels was launched without sufficient safeguards. Internal research documents show that comments on Reels contained 75% more bullying and harassment, 19% more hate speech, and 7% more violence or incitement compared to regular Instagram feeds.
Meanwhile, at TikTok, a trust and safety team member (identified only as Nick) provided unprecedented access to internal dashboards showing how cases involving politicians were systematically prioritized over serious complaints about harm to children. In one alarming example, a case in which a politician was mocked by being compared to a chicken was ranked above that of a 17-year-old cyberbullying victim in France and that of a 16-year-old Iraqi girl facing sexual blackmail involving impersonated images.
‘The urgency is not high,’ Nick said of the Iraqi girl’s case, noting that despite the high-risk nature of sexual blackmail involving a minor, the system classified it as lower priority (P2). He revealed that when staff asked to prioritize cases involving young people over political cases, management instructed them to keep the existing ranking system.
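To make the kind of fixed ranking scheme Nick describes more concrete, here is a minimal, hypothetical sketch in Python. The priority tiers, category weights, and case details are illustrative assumptions, not TikTok’s actual moderation system; the point is only how a scheme in which case category dominates the ranking can leave a critical child-safety case labelled P2 behind a low-severity political case.

```python
from dataclasses import dataclass

# Hypothetical priority tiers; P0 is handled first, P2 last.
PRIORITY_LABELS = {0: "P0", 1: "P1", 2: "P2"}

# Illustrative, assumed category weights: the category alone decides the tier.
CATEGORY_WEIGHT = {
    "political_figure": 0,  # always reviewed first under this fixed scheme
    "general_harm": 1,
    "minor_safety": 2,      # lands in the P2 tier, echoing the case Nick described
}

@dataclass
class ModerationCase:
    description: str
    category: str
    severity: int  # 1 (low) to 5 (critical), set by the reporting reviewer

def priority(case: ModerationCase) -> int:
    # The category weight decides the tier; severity only breaks ties within
    # a tier, so even a critical minor-safety case cannot outrank a
    # low-severity political case under this scheme.
    return CATEGORY_WEIGHT.get(case.category, 1)

def triage(queue: list[ModerationCase]) -> list[ModerationCase]:
    # Sort by tier first, then by descending severity within each tier.
    return sorted(queue, key=lambda c: (priority(c), -c.severity))

if __name__ == "__main__":
    queue = [
        ModerationCase("Politician mocked in a meme", "political_figure", 1),
        ModerationCase("Minor targeted with sexual blackmail", "minor_safety", 5),
    ]
    for case in triage(queue):
        print(PRIORITY_LABELS[priority(case)], case.description)
    # The political case prints first (P0); the critical minor-safety
    # case is triaged last, in the P2 tier.
```

Under a scheme like this, reviewers asking to move young people’s cases up the queue would need the category weights themselves to change, which is precisely what staff say management declined to do.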
Ruofan Ding, a former machine-learning engineer who worked on TikTok’s recommendation algorithm from 2020 to 2024, described the system as a ‘black box’ whose internal workings were difficult to scrutinize. ‘We have no control of the deep-learning algorithm in itself,’ Ding said, explaining that engineers saw content only as numerical IDs rather than the actual material, and relied entirely on safety teams to remove harmful posts before the algorithm could promote them.
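As a rough illustration of the separation Ding describes, the sketch below shows a toy embedding-based recommender in which each post is represented only by an integer ID, and harmful content is removed by a separate safety filter before ranking. Every name, dimension, and ID here is an assumption for illustration; this is a simplification of how such systems generally work, not TikTok’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# The ranking model only ever sees integer item IDs; each ID maps to a
# learned embedding vector, so anyone inspecting the model sees numbers,
# not the underlying posts.
NUM_ITEMS = 1000
EMBED_DIM = 16
item_embeddings = rng.normal(size=(NUM_ITEMS, EMBED_DIM))

def score_items(user_embedding: np.ndarray, candidate_ids: list[int]) -> np.ndarray:
    """Dot-product relevance scores between a user vector and candidate item IDs."""
    return item_embeddings[candidate_ids] @ user_embedding

def recommend(user_embedding, candidate_ids, removed_ids, k=3):
    # Safety teams supply `removed_ids`; anything they have not removed
    # remains eligible for promotion, which is the dependency Ding describes.
    eligible = [i for i in candidate_ids if i not in removed_ids]
    scores = score_items(user_embedding, eligible)
    top = np.argsort(scores)[::-1][:k]
    return [eligible[i] for i in top]

if __name__ == "__main__":
    user = rng.normal(size=EMBED_DIM)
    candidates = list(range(50))
    flagged_by_safety_team = {7, 23}  # hypothetical IDs removed before ranking
    print(recommend(user, candidates, flagged_by_safety_team))
```

The design choice the sketch highlights is that the ranking step has no notion of what an ID contains: any harmful post that slips past the upstream safety filter is scored purely on predicted engagement.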
The human cost of these decisions is starkly illustrated by cases like that of Calum, now 19, who reported being ‘radicalized by algorithm’ from the age of 14. The recommendation system exposed him to content that amplified racist and misogynistic views. ‘They just made me very kind of angry. It very much reflected the way I felt internally,’ Calum recounted.
Counter-terror police specialists in the UK confirm they’ve observed the ‘normalization’ of antisemitic, racist, violent and far-right content in recent months, with one officer noting that ‘people are more desensitized to real-world violence and they are not afraid to share their views.’
Both companies have denied the whistleblowers’ allegations. Meta stated: ‘Any suggestion that we deliberately amplify harmful content for financial gain is wrong,’ while TikTok called the claims ‘fabricated’ and emphasized its investments in safety technology. TikTok specifically rejected the idea that political content is prioritized over young people’s safety, saying this ‘fundamentally misrepresents’ how its moderation systems operate.
Despite these denials, the internal documents and firsthand accounts paint a consistent picture of platforms making calculated trade-offs between user safety and engagement growth, with particularly severe consequences for teenage users and vulnerable populations worldwide.
