From: Project Liberty <[email protected]>
Subject: Lip service or tipping point? The tech alliance to protect kids
Date: May 7, 2024 2:44 PM
  Links have been removed from this email. Learn more in the FAQ.
We explore a white paper and an alliance among tech companies to prevent CSAM by adopting "safety by design" principles in generative AI tools

View in browser ([link removed] )

May 7, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )

The unexpected alliance to protect kids online

It’s not often that the biggest and most competitive tech companies come together to form an alliance, but that’s exactly what happened last month.

Ten major tech firms committed to adopting principles that will protect children against the growing threat of online sexual abuse posed by generative AI.

This week, we’re exploring what that commitment actually means and whether it could serve as a turning point toward a more proactive approach to Trust & Safety.

As a warning, this newsletter contains references to child sexual abuse. Please read with care.

// Commitments to safety

Spearheaded by Project Liberty Alliance members Thorn ([link removed] ) and All Tech Is Human ([link removed] ), ten tech companies (Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, and Stability AI) committed to incorporating new safety measures ([link removed] ) to protect children from online exploitation exacerbated by generative AI technology.

//

In 2023, more than 104 million files of suspected CSAM were reported in the US.

//

// The risks of generative AI for kids

Child sexual abuse material (CSAM) has been a problem on the internet for years.

- One in three ([link removed] ) internet users around the world is a child, and 800 million children actively use social media.
- In 2023, more than 104 million files ([link removed] ) of suspected CSAM were reported in the US.

But generative AI tools can be used to create and distribute CSAM at scale, posing a massive threat to children and families around the world.

In the white paper “Safety by Design for Generative AI: Preventing Child Sexual Abuse ([link removed] ) ,” Thorn identified four ways that generative AI tools can be exploited to exacerbate online harms to children.

- Victim identification is harder: For online CSAM, identifying the victim is already a “needle in a haystack” problem for law enforcement. With AI-generated CSAM (AIG-CSAM), identifying victims is even harder because real imagery can be blended into photorealistic permutations.
- AI creates new ways to victimize and re-victimize children: Bad actors can now easily generate new CSAM, sexualize benign imagery of children, and generate content to target children.
- More AIG-CSAM begets more AIG-CSAM: Thorn reports that the growing prevalence of CSAM fuels further demand for it. They point to research ([link removed] ) showing that the more engagement there is with this material, the greater the risk of future offenses.
- AI models can make bad actors more effective: AI chatbots can provide instructions to bad actors on everything from how to manipulate victims to how to destroy evidence.

// Safety by design

Instead of responding to an offense that has already occurred, “safety by design” takes a more proactive approach. It requires tech companies to anticipate where threats may occur—from design to development to deployment—and build in the necessary safeguards.

The Thorn white paper outlines three steps for tech companies to get ahead of the problem:

- Develop, build, and train generative AI models that proactively address child safety risks. This includes responsibly sourcing training datasets free from CSAM (a report ([link removed] ) last year found CSAM in an open dataset used to train AI models), conducting CSAM-oriented stress-testing during development, and building media provenance tools that help law enforcement track down bad actors.
- Release and distribute generative AI models only after they have been trained and evaluated for child safety. This includes responsibly hosting models and supporting the developer ecosystem in its efforts to address child safety risks.
- Maintain model and platform safety by continuing to understand and respond to child safety risks. This includes removing AI models built specifically to produce AIG-CSAM (some services “nudify” benign images, an issue that has shown up in high schools ([link removed] ) this year), investing in research to stay ahead of the curve, and detecting and removing harmful content.

David Ryan Polgar, Founder & President of All Tech Is Human, who worked with Thorn to establish the principles, said, “The biggest challenge for companies making this commitment, along with all other companies and key stakeholders, is recognizing that a thorny issue like reducing AIG-CSAM is hard but it is not impossible. The turning point that this alliance represents is a growing recognition that time is of the essence so we need to move at the speed of tech.”

// Impossible to be perfect

Tech companies are not starting from scratch. They’ve been working on Trust & Safety initiatives ([link removed] ) for years, but there is a wide chasm between pledging to do something and actually doing it.

- Even the most proactive safety-by-design efforts can’t catch everything ([link removed] ). While CSAM can be minimized in the datasets used to train AI models, it is far harder to clean, or to stop the distribution of, CSAM in open datasets that have no central authority, according to research ([link removed] ) from last year.
- While CSAM imagery is illegal if it depicts real children or was produced by a model trained on imagery of real children, fully synthetic images that do not draw on real source images could be protected as free speech, according to a new report by Stanford University ([link removed] ).
- The Stanford report also found that the CyberTipline ([link removed] ), the federal clearinghouse for reports of child sexual abuse material, run by the National Center for Missing and Exploited Children, is overwhelmed and unprepared to handle the volume of AI-generated CSAM.

// It will take all of us

Commitments and actions are a step in the right direction, but they’re just one dimension of a more comprehensive solution that likely requires federal regulation to ensure the safety of today’s tech ecosystem. There is reason to believe that today’s breaking point might become tomorrow’s tipping point ([link removed] ) .

Polgar said, “This is an issue that relies on people coming together across civil society, government, industry, and academia. So instead of playing Whac-A-Mole or searching for an elusive silver bullet, we need to conceive of complex tech & society issues like a Rubik’s Cube. It’s hard, it’s connected, but it’s solvable with enough hard work.”

Project Liberty in the news

// MSNBC’s Chuck Todd featured “Our Biggest Fight” in his article about the race to build a better internet. Read here ([link removed] ) .

Other notable headlines

// 🇪🇺 The EU is investigating Meta over the spread of disinformation on its platforms ahead of the EU’s elections in June, according to an article in The New York Times ([link removed] ) .

// 📱 An article in The New Yorker ([link removed] ) asks: How do we hold on to what matters in the age of distraction?

// 🤖 Teens are making friends with AI chatbots. But what happens when AI advice goes too far? An article in The Verge ([link removed] ) explored the benefits and costs when your first call is to an algorithm.

// 🏛 An article in WIRED ([link removed] ) profiled Arati Prabhakar, the woman who introduced ChatGPT to President Biden and has the ear of the president on all things AI.

// 🚗 An investigation by The Markup ([link removed] ) found that car tracking can enable domestic abuse: internet-connected cars allow abusers to track domestic violence survivors.

// 💵 Videos on TikTok about the economy and consumerism are rewiring the brains of Gen Z and creating cases of ‘money dysmorphia,’ according to an article in The Wall Street Journal ([link removed] ) .

// 🇮🇳 AI companies are making millions producing election content in India, according to an article in Rest of the World ([link removed] ) .

Partner news & opportunities

// Virtual & in-person event on The Anxious Generation

May 20th, 2-3pm ET

The Sustainable Media Center ([link removed] ) is hosting an in-person and virtual conversation with Jonathan Haidt, Scott Galloway, and Emma Lembke about Haidt’s book The Anxious Generation. The three will discuss the underlying causes of, and potential solutions to, the rising levels of anxiety among young people. Register here ([link removed] ).

// Road trip to responsible tech

The Siegel Family Endowment ([link removed] ) and Roadtrip Nation ([link removed] ) are hosting a country-wide road trip for three young people to explore the exciting world of public interest technology. Apply by June 2nd ([link removed] ).

What did you think of today's newsletter?

We'd love to hear from you. Reply to this email with:

- Feedback for how we can make this newsletter better
- Ideas for future editions
- A recommendation of someone we should interview

/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /

Thank you for reading.

Facebook ([link removed] )

LinkedIn ([link removed] )


Instagram ([link removed] )


501 W 30th Street, Suite 40A,

New York, New York, 10001

Unsubscribe ([link removed] ) Manage Preferences ([link removed] )

© 2023 Project Liberty
