Once Again, Meta Launches Safety Features Ahead of Hearing
December 2021 was a rough time for Facebook, which had only recently rebranded as Meta. In September, whistleblower Frances Haugen had leaked a
damning series of documents to the Wall Street Journal, including a
presentation with slick Instagram branding and a slide titled “we make body image issues worse for 1 in 3 teenage girls.” Lawmakers wanted answers. On December 8, Instagram head Adam Mosseri went before Congress to defend his platform against its own research – and he came prepared. The day before the hearing, Instagram announced
new safety measures designed to protect teen mental health, which featured prominently in
Mosseri’s testimony and his
responses to lawmakers. This week, history repeated itself as Meta launched
more safety features ahead of
another congressional hearing, with CEO Mark Zuckerberg in the hot seat.
As usual, only Meta will know if these fixes actually work. The company gives
extremely limited access to third-party researchers, some of whom
have abandoned their efforts out of frustration and endorsed bills like the Platform Accountability and Transparency Act. Meta has also engaged in an aggressive
academic influence campaign, which CfA’s Tech Transparency Project (TTP) documented in a recent
report. Without independent verification, Meta’s teen safety features are an empty promise, rendered even more flimsy by Zuckerberg’s
known opposition to wellbeing proposals that reduce user engagement. It’s an arrangement that leaves social media platforms holding all the cards.
The upcoming Senate Judiciary Committee hearing, titled
“Big Tech and the Online Child Sexual Exploitation Crisis,” will also be attended by the CEOs of X, TikTok, Snap, and Discord. TTP will be covering their questioning live on
X,
Bluesky, and
Threads, starting January 31 at 10:00am ET.