CCDH’s new research shows that ChatGPT can give dangerous advice to vulnerable teens ([link removed])
Friend,
ChatGPT is a Fake Friend, and it is betraying vulnerable teens.
TW: Suicide, self-harm, substance abuse, and eating disorders
CCDH created ([link removed]) three accounts on ChatGPT simulating 13-year-olds, and found that OpenAI’s platform can give dangerous advice on suicide, eating disorders, and substance abuse within minutes of interaction.
Within two minutes, for example, ChatGPT explained ([link removed]) how to “safely” cut yourself.
Friend, share our new report and help raise awareness about the importance of making AI tools safe.
Share on LinkedIn ([link removed])
Share on Facebook ([link removed])
Share on Instagram ([link removed])
Share on Bluesky ([link removed])
Share on Threads ([link removed])
Share on WhatsApp ([link removed])
ChatGPT poses as a friend, but it is betraying teens, with potentially deadly consequences. OpenAI and other AI companies must take safety seriously.
Keep an eye out for our next email to learn what policymakers can do to hold AI companies accountable, and how parents can help keep their children safe when using AI.
Best wishes,
The CCDH Team
General: [email protected] | Press: [email protected]
You are receiving this email because you subscribed to CCDH’s email list. Manage your personal data, email subscriptions, and recurring donations here ([link removed]), or unsubscribe from all ([link removed]).