Whistleblowers from Meta and TikTok have exposed troubling practices within these social media giants. Internal investigations reveal that both companies prioritized engagement metrics, often at the expense of user safety, particularly regarding harmful content that includes violence, sexual exploitation, and misinformation.



Over a dozen insiders provided insight into these practices, arguing that internal policies shifted focus toward generating outrage-driven likes and shares, which sharply increased the visibility of potentially damaging content. One Meta engineer said teams were directed by management to surface more borderline harmful content in order to compete with TikTok's fast-growing user base.



This strategy was not merely a reaction to competition; according to Meta insiders, it was also driven by pressure to recover the company's stock price amid declining market performance. Witnesses noted that safety teams were significantly understaffed, leaving them unable to keep pace with the volume of user-generated content requiring review.



A TikTok employee further revealed disparities in how reports of harmful content were handled: complaints involving political figures took precedence over reports concerning at-risk minors. This prioritization, described by a whistleblower nicknamed Nick, raises serious questions about the companies' commitment to user safety, particularly for vulnerable populations such as children.



These moderation failures have had concerning consequences. Reports indicated that teens continued to receive recommendations for harmful content. In one case, a young person recounted an alarming descent into radicalization driven by algorithmically recommended content. Such personal testimonies underscore a troubling normalization of harmful beliefs and ideologies that social media platforms continue to grapple with.



Despite these serious allegations, both companies have denied any intentional wrongdoing. A Meta spokesperson defended the company's moderation strategies, saying strict policies are in place to protect user well-being. TikTok likewise rejected claims that it prioritized political content over user safety, asserting that it continuously invests in technology to combat harmful content.



These whistleblower revelations raise pressing questions about the responsibilities of social media platforms to protect their users and point to an urgent need for reform in how content is moderated. The findings serve as a stark reminder of the harms that can accumulate in the race to capture user attention.