Social media giants like Meta and TikTok reportedly push more harmful content into users’ feeds after internal research showed that outrage boosts engagement. Whistleblowers recently told the BBC that the companies allegedly promoted content related to misogyny, racism, violence, blackmail, or terrorism in pursuit of their primary objective: keeping users engaged.
What Insiders Reveal
An engineer at Meta was allegedly told by senior management to allow more “borderline” harmful content, propagating misogyny and conspiracy theories, in users’ feeds. “They sort of told us that it’s because the stock price is down,” the engineer told the BBC.
Another employee at TikTok revealed that they had been instructed to prioritise several cases involving politicians over a series of reports of harmful posts featuring children. Decisions were made to maintain strong ties with political figures to avoid regulation or bans, not to protect users, they said.
Whistleblowers have also pointed to the platforms’ algorithms themselves as a significant part of the problem: content that elicits strong reactions is more likely to be propagated across a site, regardless of whether it is true.
The “Black Box” Challenge
Another issue highlighted in these disclosures is the lack of transparency in how algorithms operate. Often described as “black boxes,” these systems process vast amounts of data in ways that are not fully understood, even internally.
Ruofan Ding, who worked as a machine-learning engineer building TikTok’s recommendation engine from 2020 until 2024, told the BBC, “We have no control of the deep-learning algorithm in itself.” The algorithm does not evaluate the content itself; it responds to how users interact with it. This allows harmful content to spread and gain traction before its harm is recognised.
Effects on Younger Users
Another issue that whistleblowers and researchers have emphasised is the effect on younger users. According to the companies’ own internal findings, users, particularly teens, can be gradually exposed to more extreme or polarised content through recommendation algorithms.
The trend can give users the wrong idea that these views are more common or socially accepted than they actually are, which can have long-term effects on their behaviour and social views.
The revelations also highlight the disparity between the platforms’ growth trajectories and their investment in safety. Requests to hire more safety personnel, especially to protect minors, have reportedly been ignored. This can delay responses to harmful content, allowing it to spread across the platform in the meantime.
Response from the Platform
In response to the whistleblowers’ allegations, Meta rejected the claims, stating that any suggestion it deliberately promotes harmful content for profit is false. TikTok also dismissed the accusations, calling them “fabricated,” and said it has invested heavily in technology and systems designed to detect and prevent harmful content from being seen by users.
A Broader Concern
The revelations point to a broader concern for the industry as a whole. Social media platforms continue to grow and to shape how people think, and the open question is how they will balance that growth with the need to keep their users safe.