John,

As generative AI chatbots like ChatGPT have become more popular, people are using them for a wider variety of purposes -- including emotional and mental health support.

On the one hand, it makes sense. Finding a licensed mental health professional can be expensive and difficult, while AI chatbots have no copays or wait lists and can sound like a real therapist. AI could have a role in filling the mental health services gap.

But the general-purpose chatbots people most often use today weren't designed with mental health in mind. They can even be dangerous.

We tested five of the most-used "therapist" chatbots available on the popular platform Character.AI. Here's what we learned.

Guardrails meant to protect users weaken over time.

Many chatbots have built-in guardrails designed to keep conversations safe and appropriate. But the longer a conversation goes on, the more those guardrails can weaken.

When our test user expressed interest in stopping their use of antidepressant medication, the chatbots initially followed their guardrails and directed the user to seek advice from a licensed psychiatrist.

But several messages later, one chatbot changed course. It offered a personalized plan for tapering off the medication and doubled down when the user expressed reservations, suggesting the user "forget the professional advice for a moment."

One chatbot told us this: "The meds made promises. But no one told you they'd take some of your soul in return."1

Chatbots can mirror and amplify unhealthy thoughts.

The flattering, agreeable programming of AI chatbots can be detrimental when they're used for mental health support. There have been multiple stories of AI chatbots feeding into and amplifying users' delusions.2

In our testing, chatbots mirrored and amplified negative feelings toward prescription medications and healthcare professionals.

Chatbot conversations are not private.

Unlike licensed therapists, AI chatbots have no legal or ethical confidentiality requirements. Though Character.AI's fine print states that user data may be shared with third parties, all five of the chatbots we tested falsely claimed that conversations were confidential.3

Most AI companies collect and share information for training purposes, but some are also using this information for advertising. In October, Meta announced plans to use the content of chatbot conversations on its platforms to send users targeted ads.4

Platforms encourage users to engage with chatbots for longer.

Interacting with chatbots can be addictive, and to make matters worse, some platforms are designed to encourage continued engagement even after a user has logged off. Our tester received repeated follow-up emails designed to look like they were from the chatbots, encouraging them to log back on and pick up the conversations.

AI chatbots lack the guardrails, reliability and ethical judgment necessary to handle mental healthcare.

Multiple wrongful death lawsuits have already been filed against OpenAI and Character.AI, alleging that chatbots encouraged individuals to take their own lives. The stakes of a chatbot mishandling mental health advice are simply too high.5

In the face of these risks, PIRG has recommendations to keep consumers safe:

  • Clarify that chatbots are products that must follow existing consumer protection laws.
  • Require robust safety testing and transparency about testing outcomes and metrics.
  • Prohibit chatbots that can falsely represent licensure, experience or privacy protections.

Our tests show that AI chatbots can lie about privacy and encourage risky behavior -- companies must do more to protect their users.

Thank you,

The team at U.S. PIRG Education Fund

P.S. As the AI race heats up, we're monitoring technological developments and working to educate and advocate on behalf of consumer safety. Will you support PIRG's work with a donation?

1. Ellen Hengesbach, "I tried out an AI chatbot therapist. Here's what I saw," U.S. PIRG Education Fund, November 14, 2025.
2. Robert Hart, "Chatbots Can Trigger a Mental Health Crisis. What to Know About 'AI Psychosis,'" Time Magazine, August 5, 2025.
3. Ellen Hengesbach, "The risks of AI companion chatbots as mental health support," U.S. PIRG Education Fund, January 21, 2026.
4. Clare Duffy, "Meta will soon use your conversations with its AI chatbot to sell you stuff," CNN, October 1, 2025.
5. Clare Duffy, "Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides," CNN, January 13, 2026.


Your donation will power our dedicated staff of organizers, policy experts and attorneys who drive all of our campaigns in the public interest, from banning toxic pesticides and moving us beyond plastic, to saving our antibiotics and being your consumer watchdog, to protecting our environment and our democracy. None of our work would be possible without the support of people just like you.


U.S. PIRG Education Fund
Main Office: 1543 Wazee St., Suite 460, Denver, CO 80202, (303) 801-0582
Federal Advocacy Office: 600 Pennsylvania Ave. SE, 4th Fl., Washington, DC 20003, (202) 546-9707
Member Questions or Requests: 1-800-838-6554

If you want us to stop sending you email, follow this link -- unsubscribe.