What if we certified digital platforms based on their prosocial outcomes?

November 18th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Cars have safety standards. Tech platforms do not.

 

Your apartment must adhere to building codes. Your car must comply with safety standards. Your favorite restaurant must pass health inspections. 

 

But your digital platforms? No unified standards. No safety inspections before launch. No comprehensive code ensuring they're built for human wellbeing.

 

This is concerning for many reasons.

 

The harms prevalent on digital platforms are well documented—the spread of misinformation, the negative effects on teen mental health, the erosion of privacy, the deepening of polarization. Majorities of Americans have expressed concern about the effects of social media, according to a study by Project Liberty Alliance member More in Common.

 

Perhaps most alarming is that many of those harms stem from the design of the platforms themselves. Consider a study we highlighted in last week’s newsletter: When researchers created a simulated social network populated entirely with AI chatbots, the simulation quickly began to resemble the familiar, toxic patterns found in “real-life” social networks: polarization, echo chambers, and the amplification of extreme posts. “These problems may be rooted in the very architecture of social media platforms,” the researchers concluded in their published findings.

 

In the Blueprint on Prosocial Tech Design Governance, released earlier this year by the Council on Tech and Social Cohesion, lead author Dr. Lisa Schirch, a Professor of the Practice of Peace Studies at the University of Notre Dame, maps specific changes to platform design and architecture that can reduce harm and increase benefits to society.

 

Schirch and Blueprint contributor Lena Slachmuijlder, Senior Advisor for Digital Peacebuilding at Search for Common Ground and Co-chair of the Council on Tech and Social Cohesion, counter the notion that digital harms are purely the result of ‘bad people’ posting ‘bad content.’ Rather, they point to a framing developed by technologists from the Integrity Institute:

Image from Blueprint on Prosocial Tech Design Governance

This week, we explore the Blueprint’s core principles—and what a safer, healthier digital world could look like if we built our technology with human wellbeing as the standard.

 

// Platform design is not neutral

Every swipe, click, and scroll is shaped by design. The architecture of a platform determines what kind of human behavior it rewards or suppresses. It decides:

  • What the design allows you to do. Ex: A social platform that makes it easy to produce AI-generated deepfakes.
  • What the design prevents you from doing. Ex: A platform that prevents you from taking your content with you if you want to leave.
  • What the design persuades or incentivizes you to do. Ex: A social feed that’s engineered to keep you scrolling.
  • What the design amplifies and highlights. Ex: A social feed that amplifies extreme posts versus a platform that shows you news from different points of view.
  • What the design manipulates or deceives you into doing. Ex: A social platform that feeds you ads based on your emotional state.

In the same way that the design of a platform can create the conditions for harmful content and behavior to persist online, it can also do the opposite. It can create the incentives and conditions that, as the authors say on page 4, “reduce digital harms and enable individuals to communicate with others in ways that uphold human dignity and enable public problem solving.”

 

Platform design is changeable, but prosocial platforms are not inevitable. Building them requires more than better design; it demands safeguarding independent research and reshaping market forces to make prosocial innovation possible.

Image from Blueprint on Prosocial Tech Design Governance

// The Blueprint on Prosocial Tech Design Governance

The Blueprint makes three interconnected policy recommendations.

 

1. Advance Prosocial Tech Design

Advancing prosocial tech design starts with the recognition that no technology has a “neutral” design. In contrast to tech design that polarizes, addicts, and estranges, prosocial tech design cultivates “healthy online interactions, safety, well-being, and dignity by integrating prosocial principles into digital platforms.”

Antisocial Tech vs. Prosocial Tech Chart

The Blueprint recommends implementing a tiered certification system for prosocial platforms.


Modeled after systems like LEED (for sustainable building), B Corp (for ethical business), and Environmental, Social, and Governance (ESG) standards, this tiered certification system would establish shared criteria and clear baseline standards for technology companies to meet.

  • At Tier 1, for example, platforms would meet a baseline of prosocial tech design standards. These minimum standards would reduce compliance risks and emphasize proactive safety measures, data protection, and user agency.
  • At Tier 5, the Blueprint recommends legislation that would require platforms to implement third-party services that mediate the user-platform relationship, while also advancing data sovereignty, enabling portability, and enforcing interoperability across major digital platforms.
Image from Blueprint on Prosocial Tech Design Governance

2. Provide Foundational Governance for Platform Research

Research is crucial to understanding the impact of digital platforms on individuals and society. Without it, neither governments nor civil society can hold platforms accountable—or reward them for positive impact.

  • Platforms have increasingly restricted API access for researchers studying their dynamics.
  • In one notable example, Meta shut down CrowdTangle, the social media monitoring and transparency tool that allowed journalists and researchers to track the spread of mis- and disinformation, just months before the 2024 U.S. presidential election.

The Blueprint recommends implementing democratic oversight, regular audits, and greater transparency so that researchers, such as those from the Coalition for Independent Technology Research, can understand how technology design choices impact society.

 

It also advocates for Safe Harbor Provisions that create a measure of legal protection for independent researchers who evaluate platforms. The authors suggest that platform research could be aided by the creation of metrics and indicators of prosocial design. Such metrics could include individual “belonging scores,” “inclusivity scores,” and “civic engagement rates.”
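To make this concrete, here is a minimal sketch, in Python, of how one such indicator might be computed. The Blueprint names these metrics but does not prescribe formulas, so every signal, weight, and cap below is an illustrative assumption, not the authors' method.

from dataclasses import dataclass

@dataclass
class UserSignals:
    positive_interactions: int   # e.g., supportive replies received
    total_interactions: int      # all interactions in the window
    communities_joined: int      # distinct groups the user participates in

def belonging_score(s: UserSignals) -> float:
    """Illustrative 0-to-1 'belonging score': the share of a user's
    interactions that are positive, blended with how broadly they
    participate across communities. The 0.7/0.3 weights are assumptions."""
    if s.total_interactions == 0:
        return 0.0
    positivity = s.positive_interactions / s.total_interactions
    breadth = min(s.communities_joined, 5) / 5  # cap credit at 5 groups
    return 0.7 * positivity + 0.3 * breadth

# Example: 40 of 50 interactions positive, active in 3 communities
print(belonging_score(UserSignals(40, 50, 3)))  # -> ~0.74

In practice, such a score would draw on validated inputs (surveys, labeled interactions) rather than raw counts; the value of standardizing the metric is that regulators and auditors could compare platforms against the same yardstick.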

 

3. Shift Market Forces to Support Prosocial Design

The final policy recommendation in the Blueprint focuses on shifting market forces to reduce digital harms and support prosocial design.

 

Many of today’s market forces do the opposite: they support the data broker industry and ad-based monetization; they exploit human psychology; and they maximize engagement and “time spent” on the platform.

 

However, private capital can also incentivize alternative, prosocial outcomes. From philanthropic funders like the Mozilla Foundation to prosocial venture capital firms like Purpose Ventures, there’s a full-spectrum capital stack that can invest in prosocial solutions.

 

Regulators can also tilt the marketplace to drive prosocial outcomes. The Blueprint advocates for two policy interventions:

  1. Enforce competition, antitrust, and anti-monopoly laws: By leveling the playing field—through fair API access, anti-bundling rules, and antitrust legislation—governments can create competitive dynamics that nurture prosocial alternatives.
  2. Codify product liability for adverse impacts: Expanding liability for the defective design of digital platforms can serve as a market-correcting mechanism, encouraging transparency, risk reduction, and the development of technologies that serve the common good. If online tools, like AI agents, are considered products, tech companies would have an incentive to build more prosocial technology.

The Council is also developing a practical guide for prosocial tech design regulation, providing regulators and civil society allies with the tools to operationalize upstream, design-focused governance.

 

// Shaping the future of tech

The dynamics that dominate digital platforms today are the product of specific choices optimized for certain outcomes. If we optimize for different outcomes, a healthier, more human tech ecosystem becomes possible.

 

The Council on Tech and Social Cohesion’s global collaboration with technologists, peacebuilders, academics, and innovators advances this vision, complemented by the insights, examples of prosocial tech, and related findings shared via the Tech & Social Cohesion Substack.

 

Already, there are over 200 deliberative tech tools, startups, and platforms engineered with prosocial design in their DNA. In next week’s newsletter, we’ll explore a handful of them to understand the frontier of prosocial technology.

Project Liberty Updates

// The UN Human Rights B-Tech Project and the Project Liberty Institute have launched a partnership to promote responsible AI investment that protects data agency. Their joint paper, “The Investors Financing the AI Ecosystem,” outlines how investors can align capital with human rights while creating long-term value. Learn more here.

Other notable headlines

// 🎙 In a podcast from The Verge, Sir Tim Berners-Lee, the inventor of the World Wide Web, said he doesn’t think AI will destroy the web. He explains why he’s still optimistic about the future of the internet. (Free).

 

// 🇸🇾 He fled Assad. Now he’s leading Syria’s tech transformation. In an interview with Rest of World, Abdulsalam Haykal explains how he plans to rebuild the country’s shattered digital infrastructure. (Free).

 

// 🤖 Social media companies are heading to court over kids' mental health—and a New Mexico judge has just demanded Meta produce chatbot records in one key lawsuit, according to an article in Axios. (Paywall).

 

// 🤔 Yann LeCun invented many fundamental components of modern AI. Now he’s convinced most in his field have been led astray by the siren song of large language models, according to an article in The Wall Street Journal. (Paywall).

 

// 🛡 A report by Anthropic explained how they disrupted the first reported AI-orchestrated cyber espionage campaign. But researchers questioned Anthropic’s claim that the attack was 90% autonomous, according to an article in Ars Technica. (Free).

 

// 📬 He bought a subscription online but had to cancel by snail mail. An article in The Washington Post asks, why is fighting unwanted subscriptions such a pervasive problem? (Paywall).

 

// 🔥 AI-powered toys were caught telling 5-year-olds how to find knives and start fires with matches, according to an article in Futurism. (Free).

 

// 🚫 In a 60 Minutes segment, Anthropic CEO Dario Amodei and President Daniela Amodei explain why they spend so much time warning of AI's potential dangers. (Free).

 

// 🖊 What are the clues that ChatGPT wrote something? The Washington Post analyzed 330,000 messages to understand the chatbot’s style and how it uses language. (Paywall).

Partner news

// The race to standardize agentic commerce
Thursday, November 20 | 3:00pm ET | Virtual
Consumer Reports is convening experts from Stripe, Visa, Skyfire, and more to explore how AI agents are beginning to transact on consumers’ behalf. The session will examine emerging standards for identity, trust, and control in agentic commerce and what “loyal by design” AI could mean for future digital marketplaces. Register here.

 

// State AI legislation report highlights 73 new laws across 27 states
The Transparency Coalition has released its 2025 State AI Legislation Report, which details a sharp rise in state-level action with 73 AI-related laws enacted this year. The report offers policymakers and the public a clear view of how states are responding to emerging AI harms.

 

// Coming of Age in Today’s World with Scott Galloway

Wednesday, November 25 | 11am ET | Virtual

The Sustainable Media Center is hosting a conversation between Scott Galloway and a group of Gen Z leaders. Scott will facilitate an honest, intergenerational dialogue exploring masculinity, purpose, and what it means to build healthy identities in a world shaped by social media and AI. Register here (free for students; other attendees pay a fee).

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

 

Thank you for reading.


10 Hudson Yards, Fl 37,
New York, New York, 10001

© 2025 Project Liberty LLC