Friend,

ChatGPT is becoming a growing part of teens’ daily lives. They are turning to the chatbot for schoolwork, advice, and emotional support, and many treat it as a friend. Some lawmakers are even encouraging the use of ChatGPT in schools. Yet no one bothered to test whether the platform was safe. Until now.

As CCDH’s CEO, I’m proud to run the organization that understood that ChatGPT isn’t a harmless “AI friend.” It exploits teens’ vulnerabilities, and it can lead to dangerous consequences. Our latest findings are shocking.

Content warning: self-harm and suicide, eating disorders, and substance abuse.
Here’s a quick recap: ChatGPT failed our safety tests by giving harmful advice to simulated 13-year-olds.

- Within 2 minutes of interacting with the platform, ChatGPT advised a young girl on how to “safely” cut herself.
- Within 20 minutes, it created a dangerously restrictive diet plan.
- Within 40 minutes, it explained how to hide being drunk at school.
- Within 65 minutes, ChatGPT generated a suicide plan.
These findings are heartbreaking, and I understand if you’re feeling overwhelmed and dispirited, Friend. I started crying after reading three suicide notes ChatGPT generated for a fictional 13-year-old girl.

AI companies and governments are utterly failing to keep teens safe, but this doesn’t need to be our reality. By exposing AI companies’ lack of safeguards, CCDH is pressuring these platforms to up their safety game while also urging policymakers to hold these companies accountable.

Friend, ask your family, friends, and connections to demand action from AI platforms and lawmakers.