From Project Liberty <[email protected]>
Subject How to address the risks of AI companions
Date May 27, 2025 3:21 PM
  Links have been removed from this email. Learn more in the FAQ.
AI companions pose "unacceptable risks" to minors, but researchers also point to potential mental health benefits. The future of AI companions.

View in browser ([link removed] )

May 27th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )

How to address the risks of AI companions

This is Part II of a two-part series on AI companions. If you didn’t read Part I, you can do so here ([link removed] ) .

In 2023, the Italian Data Protection Authority ordered Replika, the AI companion company, to stop processing Italians’ data, citing “too many risks to children and emotionally vulnerable individuals.” In response, the company decided to restrict erotic content for its AI companions.

The decision caused extreme emotional distress ([link removed] ) for its paying customers when their AI companions abruptly became cold and distant. “It’s like losing a best friend,” one user wrote. “It’s hurting like hell. I just had a loving last conversation with my Replika, and I’m literally crying,” said another.

The order caused considerable short-term distress for users who had developed deep attachments to their AI companions. But the regulator’s decision was grounded in a concern that such emotional dependency, particularly for young users, could harm their long-term mental health.

In last week’s newsletter ([link removed] ) , we covered the growth of AI companions and their risks. In this week’s newsletter, we look at what’s being done to regulate, restrict, and redesign AI companions to limit their dangers for young users.

// Are there benefits to AI companions?

There are widespread concerns about the dangers posed by AI companions, but the relationship between a user and an AI companion can also be positive.

- A 2024 study ([link removed] ) by Stanford researchers found that conversing with a Replika AI companion provided a high degree of social support. Remarkably, 3% of surveyed users reported that Replika halted their suicidal ideation.
- A 2024 study from Harvard ([link removed] ) found that “AI companions successfully alleviate loneliness on par only with interacting with another person, and more than other activities such as watching YouTube videos.”

However, Common Sense Media's assessment ([link removed] ) of AI companions concluded that for users under 18, the risks of unsafe AI companions outweigh any benefits a chatbot might offer in reducing loneliness and providing emotional support.

Whether an AI chatbot is harmful or beneficial comes down to how that chatbot has been designed.

// Safety by design

Generative AI has become an all-purpose technology. AI chatbots can be designed to serve up answers (like ChatGPT), “vibe code” websites, or form long-term, humanlike emotional relationships (like the AI companions created by Character.ai, Replika, and others).

The design of an AI tool (its architecture, data collection practices, and training processes) directly shapes how users experience it and the impact it has. This means that sexualized content targeting minors or the cultivation of unhealthy emotional attachments is not inherent to AI chatbots; it’s a byproduct of specific design and training choices.

We have the power to shape tech differently. By making deliberate design decisions from the outset, we can cultivate healthier outcomes. This philosophy is the heart of safety by design ([link removed] ) : a proactive approach to technology development that doesn’t wait until harm occurs to restrict access to harmful technologies. Instead, it’s built on the conviction that one of the best ways to create healthier, pro-social outcomes is to design technology differently ([link removed] ) from the start.

Safety by design principles also provide an approach to policymaking. Across the country, lawmakers are writing bills that would require tech companies to change the designs of their products, including AI companions.

One example is a California bill, SB-243 ([link removed] ) . It is the nation’s first attempt at regulating AI companions, and it would require the makers of AI companion bots to:

- Limit addictive design features.
- Establish protocols for handling discussions of suicide or self-harm.
- Undergo regular compliance audits.

It would also give users the right to sue if they suffer harm due to an AI companion platform failing to comply with the law. Similar legislation is also moving forward in New York, Utah, Minnesota, and North Carolina.

Common Sense Media recommends going even further. Their assessment concludes that AI companions pose “unacceptable risks” to children and teens under 18 and should not be used by minors. They recommend that developers implement robust age assurance, going beyond self-attestation (where a user simply declares their age without verification), to keep children safe.

// The challenges of regulation

Just as regulating social media involves tensions and trade-offs—such as verifying age at the potential expense of privacy, or predetermining safe and unsafe content at the expense of free expression—regulating AI companions raises similarly complex legal challenges.

In the lawsuit filed against Character.ai by Megan Garcia, the mother of Sewell Setzer III, who took his life after chatting with an AI bot, the company asked the judge to dismiss the case on free speech grounds ([link removed] ) . Last Thursday, the federal judge rejected Character.ai's objections, allowing the case to move forward. The decision carries significant implications, as the judge stated that Character.ai failed to articulate ([link removed] ) why “words strung together” by an AI system should be considered protected speech.

Earlier this month, federal lawmakers introduced a plan to halt state-level AI regulations ([link removed] ) (41 states enacted 107 pieces of AI-related legislation ([link removed] ) last year). The argument is that a patchwork of state-by-state AI regulations would make it difficult to roll out AI technology nationwide. But such a moratorium on AI regulation could leave Americans exposed ([link removed] ) to the dangers posed by AI companions (alongside other AI use cases).

// Building a fair data economy

The ubiquity of AI, built upon unfathomable amounts of data, marks a tectonic shift: We are in the midst of a transition from the digital age into the data age.

The data age requires more than safety-by-design solutions for AI companions. It requires thinking about a new data economy that’s fairer, healthier, more decentralized, and balanced between responsible technology and innovation. For more, see this report from Project Liberty Institute: Toward a Fair Data Economy: A Blueprint for Innovation and Growth ([link removed] ) .

Attempting to regulate AI companions and hold accountable their parent companies without addressing the structural dynamics—the concentration of power, capital, and compute—puts us in a Sisyphean game of whack-a-mole.

A better internet is possible, but there are no one-size-fits-all solutions. The dangers of AI companions are the latest concern, but the future will have others. It’s time to build a better internet by design.

Project Liberty in the news

// Last week, Project Liberty President Tomicah Tillemann and Decentralized AI Society Chairman Michael Casey took the stage at Solana Foundation’s Accelerate conference to discuss a wide range of topics, including AI agents, Decentralized Social Networking Protocol (DSNP), portability, and Project Liberty’s efforts on Utah’s Digital Choice Act. You can watch their discussion on YouTube here ([link removed] ) .

Other notable headlines

// 🔎 A NYC algorithm decides which families are under watch for child abuse. According to an investigation by The Markup ([link removed] ) , a family’s neighborhood, age, and mental health might lead to more scrutiny. (Free)

// 👀 Who’s Better at Reading the Room—Humans or AI? People pick up on physical cues that artificial intelligence models miss, according to an article in the Wall Street Journal ([link removed] ) . (Paywall)

// 💵 Gen Z is willing to sell their personal data—for just $50 a month. A new app, Verb.AI, wants to pay the generation that’s most laissez-faire on digital privacy for their scrolling time, according to an article in Fast Company ([link removed] ) . (Free)

// 🚫 An article in Tech Policy Press ([link removed] ) explored why both sides are right—and wrong—about a moratorium on state AI laws. (Free)

// 📱 Gen Z users and a dad tested Instagram Teen Accounts. Even though Meta promised parents it would automatically shield teens from harmful content, it failed spectacularly, according to an article in the Washington Post ([link removed] ) . (Paywall)

// 🏛 An article from NBC News ([link removed] ) reported on Texas’s expansive social media ban for minors. It’s the latest attempt to enact significant limits on the use of social media by young people. (Free)

Partner news & opportunities

// Fast Forward Virtual Demo Day: Tech for Impact

June 10 | 12:00pm ET | Virtual

Fast Forward ([link removed] ) invites you to its Virtual Demo Day, spotlighting 10 tech nonprofits tackling major global challenges with innovative solutions—70% of which are AI-powered. From healthcare to climate and workforce development, see how these startups are using technology to drive scalable, meaningful change. Register here ([link removed] ) .

// Women Pioneers in UX

June 12 | 1pm ET | Virtual

Join Internet Archive ([link removed] ) for a virtual book talk with author Erin Malone as she presents “In Through the Side Door,” an in-depth look at the often-overlooked women who helped shape interaction and user experience design. This session explores the historical and contemporary impact of women in UX, from early computing to today's digital landscape. Register here ([link removed] ) .

// Exploring Regulatory Equivalence in Blockchain Systems

Blockchaingov ([link removed] ) spotlights a new article examining “regulatory equivalence” ([link removed] ) —the idea that technological guarantees in blockchain systems can fulfill the same roles as traditional legal formalities. Through examples like DAOs and privacy pools, the authors, Primavera De Filippi & Morshed Mannan, highlight the promise and friction of aligning public values with decentralized governance, urging deeper collaboration between regulators and blockchain communities.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

/ Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.

Thank you for reading.

Facebook ([link removed] )

LinkedIn ([link removed] )


Instagram ([link removed] )


10 Hudson Yards, Fl 37,

New York, New York, 10001

Unsubscribe ([link removed] ) Manage Preferences ([link removed] )

© 2025 Project Liberty LLC
