Social media giants deliberately allowed more harmful content on users’ feeds after internal research showed outrage-driven posts fueled engagement, according to more than a dozen whistleblowers who revealed how companies sacrificed safety to compete for attention and market share.
An engineer at Meta, which owns Facebook and Instagram, described how senior management directed him to allow more “borderline” harmful content, including misogyny and conspiracy theories, in users’ feeds to compete with TikTok’s explosive growth. “They sort of told us that it’s because the stock price is down,” the engineer said, revealing that financial pressure drove decisions to relax content standards.
A TikTok employee provided rare access to the company’s internal dashboards of user complaints, along with evidence showing staff were instructed to prioritize cases involving politicians over numerous reports of harmful posts featuring children. The decisions aimed to “maintain a strong relationship” with political figures to avoid threats of regulation or bans, not to protect users from harm, the staffer said.
Matt Motyl, a senior Meta researcher, said Instagram Reels launched in 2020 without sufficient safeguards as the company rushed to compete with TikTok’s video format. Internal research showed comments on Reels had a significantly higher prevalence of bullying and harassment, hate speech, and violence or incitement than elsewhere on Instagram. Meta invested in 700 staff to grow Reels while denying safety teams’ requests for two specialists to protect children and 10 more to safeguard election integrity, according to a former senior Meta employee.

Motyl shared dozens of high-level research documents exposing harms to users across platforms. Among them was evidence that Facebook knew its algorithm created problems by offering content creators a “path that maximizes profits at the expense of their audience’s wellbeing.” An internal study concluded that “the current set of financial incentives our algorithms create does not appear to be aligned with our mission” to bring the world closer together. The study warned Facebook could “choose to be idle and keep feeding users fast-food, but that only works for so long.”
Meta responded to the whistleblowers’ claims by stating, “Any suggestion that we deliberately amplify harmful content for financial gain is wrong.” TikTok dismissed the allegations as “fabricated claims” and said the company invested in technology that prevents harmful content from being viewed. However, teenagers reported that systems allowing users to block problematic content are not working, and that they continue to receive recommendations for violent and hateful content on major platforms.
In one extreme case, a teenager named Calum, now 19, said he had been “radicalized by algorithm” starting at age 14. The algorithm showed him content that outraged him and led him to adopt racist and misogynistic views. The videos “energized me, but not really in a good way. They just made me very kind of angry. It very much reflected the way I felt internally, that I was angry at the people around me,” he said.
Counter-terror police specialists in the UK who analyze thousands of posts annually say they have witnessed the “normalization” of antisemitic, racist, violent and far-right posts in recent months. “People are more desensitized to real-world violence and they are not afraid to share their views,” one officer said, pointing to how algorithmic amplification of extreme content has shifted what users consider acceptable to post publicly.