From Project Liberty <[email protected]>
Subject 🦺 5 insights on the future of Trust & Safety
Date April 23, 2024 3:40 PM
  Links have been removed from this email. Learn more in the FAQ.
We explore the important and misunderstood field of Trust & Safety with five insights from expert Eli Sugarman.

View in browser ([link removed] )

April 23, 2024 // Did someone forward you this newsletter? Sign up to receive your own copy here ([link removed] ) .

5 insights on the future of Trust & Safety

Trust & Safety is one of the most important and misunderstood fields in today’s tech landscape.

It employs thousands of workers globally, and its efforts are crucial to minimizing online harms and making tech platforms safe and healthy places.

Trust & Safety efforts built into everyday platforms are often hidden from view. Yet they’re garnering more attention given the importance and complexity of the tasks, and the toll the work is taking on the workforce.

This week, we’re demystifying Trust & Safety by highlighting five insights about the field from our conversation with Eli Sugarman, a Senior Fellow at the Special Competitive Studies Project ([link removed] ) .

Previously, Sugarman was Vice President of Content Moderation at the Meta/Facebook Oversight Board ([link removed] ) , and Director of the Cyber Initiative at the William and Flora Hewlett Foundation ([link removed] ) , where he led a ten-year, $160 million grant-making effort to build the field of cyber policy.

// What is Trust & Safety?

Sugarman defines Trust & Safety ([link removed] ) as the efforts of companies and civil society to 1) minimize harms and risks, and 2) promote positive impacts in online spaces and across online products.

Trust & Safety applies to a wide range of technologies, including:

- Policies like Meta’s ([link removed] ) and Bluesky’s guidelines ([link removed] ) .
- Practices like data encryption ([link removed] ) , content moderation ([link removed] ) , and cybersecurity efforts ([link removed] ) that reduce fraud.
- Products like AI algorithms that moderate content ([link removed] ) online and a growing ecosystem of vendors ([link removed] ) .
- Talent like the Trust & Safety teams inside social media companies like Meta ([link removed] ) , X ([link removed] ) , TikTok ([link removed] ) , and Bluesky ([link removed] ) , but also at other platforms like Venmo ([link removed] ) , Etsy ([link removed] ) , and Uber ([link removed] ) .

Here are five insights from Sugarman about this fast-moving and misunderstood space.

// Insight #1: Trust & Safety at big tech platforms is complex

For big tech platforms like Meta, Google, and TikTok, Sugarman emphasized how complex the Trust & Safety effort is, and how there are both opponents and allies within these companies.

- Complicated operations: Many big tech platforms have built robust Trust & Safety operations with a byzantine web of policies, vendors, and thousands of workers (many hired through contractors). However, the foundations of these systems were often laid years ago, before the platforms became huge. As these cobbled-together systems have scaled to a level no one imagined, they’ve begun to break, even as companies continue to invest billions.
- Imperfect execution: Facebook has three billion monthly active users ([link removed] ) and people watch over one billion hours of YouTube ([link removed] ) daily. These platforms get criticized for not being perfect, Sugarman said, but moderating content on global platforms is hard work, and bad-faith actors will bypass safeguards or game policies to avoid detection.

While the big tech platforms get all the press, Sugarman believes philanthropy should direct its attention and its dollars to smaller platforms.

// Insight #2: The biggest opportunity is with small and midsize platforms

While governments might be best positioned to reform the biggest tech companies through regulation, philanthropy can play a role in shaping the next generation of tech companies.

Small and midsize tech companies are at an early stage where Trust & Safety can still be baked into their DNA. On the smallest platforms, often the only person thinking about Trust & Safety is a community moderator or a team of two or three part-time volunteers. Investing in smaller platforms creates the opportunity to design for scale from the beginning, Sugarman said, instead of trying to shift policies, update org charts, and mend already-broken systems in much larger companies.

Sugarman believes a suite of solutions is necessary:

- Build and maintain open-source tools, model policies, and other resources, especially for small and midsize platforms.
- Create a global network of Trust & Safety education and research centers that produce a diverse and capable talent pipeline, deliver impactful research, and better define the parameters of the field.
- Shape emerging regulations to change the behavior of platforms for the better.
- Make a compelling business case for greater investment in Trust & Safety within companies, and for private capital to invest in the Trust & Safety vendor and start-up market.
- Treat Trust & Safety as a core business function with a "seat at the table" to help make key business decisions for a company.

//

The approach to Trust & Safety needs to shift from attempting to detect and moderate content to identifying the actors and behaviors behind the content.

//

// Insight #3: We need to shift the approach to Trust & Safety

The approach to Trust & Safety needs to shift from attempting to detect and moderate content to identifying the actors and behaviors behind the content. Instead of fixating on the content itself, teams are uncovering actors and behaviors across platforms to understand where malicious content might show up next. Otherwise, content moderation is an endless game of whack-a-mole.

This means the expertise necessary for effective Trust & Safety is evolving: away from writing policies that detect specific content, and toward mapping the underlying behaviors that lead to harmful content. That shift demands more extensive data analysis to find signals and patterns in user profiles and behaviors: When was this account created? How many similar accounts have been created? Are they working as a coordinated effort to spread disinformation?
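To make that shift concrete, here is a minimal, illustrative Python sketch of behavioral signal analysis. The account fields, thresholds, and signals are our own assumptions for illustration, not any platform's actual detection system:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Account:
        handle: str
        created_at: datetime  # when the account was registered (timezone-aware)
        posts_last_24h: int   # raw posting volume
        content_hashes: set = field(default_factory=set)  # hashes of posted content

    def behavior_score(account: Account, cohort: list) -> float:
        """Score an account on simple behavioral signals.

        A higher score flags the account for human review; this is a
        triage heuristic, not a verdict on the account or its content.
        """
        score = 0.0

        # Signal: very new accounts are disproportionately throwaways.
        age_days = (datetime.now(timezone.utc) - account.created_at).days
        if age_days < 7:
            score += 1.0

        # Signal: unusually high posting volume.
        if account.posts_last_24h > 100:
            score += 1.0

        # Signal: coordination -- other accounts pushing identical content.
        lookalikes = sum(
            1 for other in cohort
            if other.handle != account.handle
            and account.content_hashes & other.content_hashes
        )
        if lookalikes >= 5:
            score += 2.0

        return score

Note how none of these signals inspect what the content says; they ask who is posting it and how, which is the behavioral lens Sugarman describes.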

// Insight #4: We need to grow the talent pipeline in Trust & Safety

Today’s global expertise in Trust & Safety has been forged through hard-won experience over time, but Sugarman still sees a talent shortage in the Trust & Safety field. “We need more university education in Trust & Safety. It needs to become a proper field with a rigorous, regimented academic pathway from community colleges to elite universities.”

There is momentum at universities like Stanford and Columbia, and there’s a Journal of Online Trust and Safety ([link removed] ) , but it’s still early days, according to Sugarman. The high-profile layoffs of Trust & Safety teams at major tech companies are not a fair depiction of the global demand for workers in this field—demand that extends beyond the US and Europe into every country.

// Insight #5: We're in a narrow window of time

A report released last year ([link removed] ) argued that there’s a narrow window of time to build the next generation of tech companies with Trust & Safety in their DNA.

- Decentralization & fragmentation: More and more people are migrating away from the big tech platforms to smaller alternatives (like platforms in “the fediverse ([link removed] ) ”). This trend will only accelerate, and the number of small platforms will grow, creating an opportunity to influence emerging platforms before their numbers outpace the capacity to influence them.
- The bite of regulation: The smaller, early-stage platforms are watching as governments crack down on bigger platforms ([link removed] ) , and they know that regulators will focus on them next, so they’re rushing to get their Trust & Safety operations in order.
- Good for business: Platforms of all sizes are beginning to recognize that “Trust & Safety can be good business,” Sugarman said. Neglecting it carries reputational risk, but successful Trust & Safety efforts ([link removed] ) can drive strong financial performance, as the recent IPO of Reddit demonstrated ([link removed] ) .
- AI technology: Generative AI is a force-multiplier for the Trust & Safety field. It can generate harmful content at scale ([link removed] ) , but it can also serve as a first line of defense ([link removed] ) in identifying unsafe content, behaviors, and actors (a rough sketch of that pattern follows below). Harnessing this technology while preventing malicious actors from abusing it will only get more challenging, Sugarman said.
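As a rough illustration of the “first line of defense” idea, the sketch below shows a common triage pattern: a model score acts automatically only at high confidence and routes uncertain cases to human reviewers. The classify_toxicity function is a toy keyword stand-in for a real moderation model, and the thresholds are invented for illustration:

    # Toy stand-in for a real moderation model; production systems use
    # trained classifiers, not keyword lists.
    BLOCKLIST = {"scam-link.example", "buy-followers"}  # invented signals

    def classify_toxicity(text: str) -> float:
        """Return a 0.0-1.0 risk score for a piece of content."""
        hits = sum(term in text.lower() for term in BLOCKLIST)
        return min(1.0, 0.5 * hits)

    def triage(text: str, block_at: float = 0.95, review_at: float = 0.60) -> str:
        """Act automatically only at high confidence; otherwise defer to humans."""
        score = classify_toxicity(text)
        if score >= block_at:
            return "block"         # high confidence: remove automatically
        if score >= review_at:
            return "human_review"  # uncertain: queue for a reviewer
        return "allow"             # low risk: publish

The design choice worth noting is the middle band: automation handles the clear cases at scale, while ambiguous content still reaches a person.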

// Building the field for the future

Sugarman was involved in the early days of building the cybersecurity field a decade ago, and he sees similarities today with Trust & Safety. He hopes the field will grow as meteorically as the cybersecurity field did, with billions of dollars invested every year.

"Everyone who wants AI and other emergent digital technologies to make the world a better place—instead of one beset by myriad harms—is a natural ally and supporter of a more robust and capable Trust & Safety field. They just may not know it yet."

Project Liberty in the news

// Last week we hosted the inaugural Project Liberty Institute Summit: Toward a New Civic Digital Infrastructure, with both the Berkman Klein Center for Internet & Society at Harvard University and the MIT Center for Constructive Communication in Cambridge, MA. The event brought together an expansive network of technologists, policymakers, academics, civil society leaders, entrepreneurs, and governance experts for engaging and productive discussions. More to come in the following weeks!

// Project Liberty’s Amplica Labs ([link removed] ) announced the acquisition of Speakeasy's pioneering AI platform for improving digital discourse. This acquisition marks a significant step forward in addressing the pressing issues plaguing online conversations today. Read more here ([link removed] ) .

Other notable headlines

// 🚢 An article in The Verge ([link removed] ) explored the invisible seafaring industry that keeps the internet afloat by tending to the fiber optic cables along the sea floor.

// 🌳 “We need to rewild the internet,” according to an article in Noema Magazine ([link removed] ) . The internet has become an extractive and fragile monoculture. But we can revitalize it using lessons from ecologists.

// 🚨 An article in MIT Technology Review ([link removed] ) explored how AI was supposed to make police bodycams better, but hasn't delivered on that promise.

// 🎒 According to an article in the Wall Street Journal ([link removed] ) , students’ phone use is disruptive, but teachers and administrators are facing an unlikely opponent: parents.

// 🧠 The US took its first big step toward protecting your brain’s privacy. An article in Vox ([link removed] ) highlighted how Colorado passed legislation to prevent companies from selling your brainwaves.

// 🕵 A guide in The Markup ([link removed] ) provided insights into spotting audio and video deepfakes from a professor who’s studied them for two decades.

// 🗳 As two billion people in 50 countries head to the polls this year, an article in Rest of World ([link removed] ) tracked the most noteworthy incidents of AI-generated election content.

// 📱 Whether you love it or hate it, TikTok has changed America. An article in The New York Times ([link removed] ) explored how.

// 🖥 An article in New York Magazine ([link removed] ) discussed how product recommendations broke Google and ate the internet in the process.

Partner news & opportunities

// In-person debate: should TikTok divest or face a ban?

April 24th at 5:30pm ET in Washington, DC

The Foundation for American Innovation ([link removed] ) is hosting an in-person debate on whether TikTok should divest or face a ban. Register here ([link removed] ) .

// Webinar on early childhood mental health and digital media

May 1st at 12pm ET

Children and Screens ([link removed] ) will host a webinar to explore how digital media impacts young children’s emotional, sensory, and relationship development. Register here ([link removed] ) .

// In-person event on suing social media platforms

May 8th at 4pm ET in Toronto, Canada

The Centre for Media, Technology and Democracy at McGill University ([link removed] ) and the dais at Toronto Metropolitan University ([link removed] ) are hosting an in-person event featuring Facebook whistleblower Frances Haugen on how to hold tech companies accountable through lawsuits. Register here ([link removed] ) .

What did you think of today's newsletter?

We'd love to hear what you thought of today's newsletter. Reply to this email with:

- Feedback on how we can make this newsletter better
- Ideas for future editions
- A recommendation of someone we should interview

/ Project Liberty is advancing responsible development of the internet, designed and governed for the common good. /

Thank you for reading.

Facebook ([link removed] )

LinkedIn ([link removed] )


Instagram ([link removed] )


501 W 30th Street, Suite 40A,

New York, New York, 10001

Unsubscribe ([link removed] ) Manage Preferences ([link removed] )

© 2023 Project Liberty