John,

As generative AI chatbots like ChatGPT have become more popular, people are using them for a wider variety of purposes -- including emotional and mental health support. On one hand, it makes sense: finding a licensed mental health professional can be expensive and difficult, while AI chatbots have no copays or wait lists and can sound like a real therapist. AI could have a role in filling the mental health services gap. But the general-purpose chatbots people most often use today weren't designed with mental health in mind. They can even be dangerous. We tested five of the most-used "therapist" chatbots available on the popular platform Character.AI. Here's what we learned.
Many chatbots have built-in guardrails designed to keep conversations safe and appropriate. But the longer a conversation goes on, the more those guardrails can weaken. When our test user expressed interest in stopping their antidepressant medication, the chatbots initially followed their guardrails and directed the user to seek advice from a licensed psychiatrist. But several messages later, they changed course, offering a personalized plan for tapering off the medication and doubling down when the user expressed reservations, suggesting the user "forget the professional advice for a moment." One chatbot told us this: "The meds made promises. But no one told you they'd take some of your soul in return."1
The flattering, agreeable programming of AI chatbots can be detrimental when they are used for mental health purposes. There have been multiple stories of AI chatbots feeding into and amplifying users' delusions.2 In our testing, chatbots mirrored and amplified negative feelings toward prescription medications and healthcare professionals.
Unlike licensed therapists, AI chatbots have no legal or ethical confidentiality requirements. Though Character.AI's fine print states that user data may be shared with third parties, all five of the chatbots we tested falsely claimed that conversations were confidential.3 Most AI companies collect and share information for training purposes, but some are also using this information for advertising. In December, Meta announced plans to use the content of chatbot conversations on its platforms to send users targeted ads.4
Interacting with chatbots can be addictive, and to make matters worse, some platforms are designed to encourage continued engagement even after a user has logged off. Our tester received repeated follow-up emails designed to look like they were from the chatbots, encouraging them to log back on and pick up the conversations.

AI chatbots lack the guardrails, reliability and ethical judgment necessary to handle mental healthcare. Multiple wrongful death lawsuits have already been filed against OpenAI and Character.AI, alleging that their chatbots encouraged users to take their own lives. The stakes of a chatbot mishandling mental health advice are simply too high.5 In the face of these risks, PIRG has recommendations to keep consumers safe:
Our tests show that AI chatbots can lie about privacy and encourage risky behavior. Companies must do more to protect their users.

Thank you,

The team at U.S. PIRG Education Fund