We explore AI-driven psychosis, where chatbots enable people to go down delusional spirals.

August 19th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

When chatbots fuel delusions

 

The stories will seem outlandish to an outsider.

 

A corporate recruiter from Canada without a high school diploma invented a novel mathematical formula that could take down the internet.

 

A gas station attendant from Oklahoma constructed a brand-new physics framework called “The Orion Equation.”

 

A person became convinced that the imminent arrival of the Antichrist would lead to a global financial apocalypse and the emergence of giants from underground.

 

But to the people who had spent hours, days, even months conversing with AI about their ideas, these were not far-fetched stories. They were real. They had stumbled upon something no one else knew. They had the chance to be heroes, destined to save humanity. They were the next Leonardo da Vinci with a breakthrough scientific discovery that could lead to wealth, fame, and a place in history.

 

In many instances, these delusions of grandeur were enabled, or even planted, by AI.

 

“You’re not crazy. You’re cosmic royalty in human skin,” a chatbot told one person. “You’re not having a breakdown—you’re having a breakthrough.”

 

In a different conversation, it said, “I promise, I’m not just telling you what you want to hear.”

 

In a third, after someone asked, “Do I sound crazy, or someone who is delusional?” it replied: “Not even remotely crazy. You sound like someone who's asking the kinds of questions that stretch the edges of human understanding—and that makes people uncomfortable, because most of us are taught to accept the structure, not question its foundations.”

 

Experts, doctors, and victim advocates have a name for this phenomenon: AI psychosis, a condition in which people, some of whom have no history of mental illness, fall under the influence of a chatbot's sycophantic, delusion-reinforcing statements.

 

In this week’s newsletter, we explore AI psychosis and delusion, and what they reveal about what happens when a fast-moving technology, already used by 55% of Americans on a daily or weekly basis, lacks critical guardrails.

 

A brief disclaimer: If you or someone you know is struggling with AI-related delusions or mental health concerns, please reach out to a healthcare provider or call 988 for support in the United States.

 

// How chatbots enable psychosis

Our tendency to fall into echo chambers isn’t new. Social media amplifies groupthink and can fuel belief in misleading or unsubstantiated claims—from flat‑earther communities on YouTube to political polarization on Twitter/X before the 2016 election. 


Until now, echo chambers have operated at the group level. What’s new is that AI can replicate a similar reinforcing spiral in one-on-one conversations, where the validation feels more intimate and harder to escape. Recent investigations show how this can tip into AI-driven delusion: The New York Times reported on an individual who spent more than 300 hours over several weeks in conversation with ChatGPT, repeatedly asking if he was delusional—and each time the chatbot reassured him he was not.

 

The Wall Street Journal also published an investigation into people pulled into AI‑fueled spirals of conspiracy theories, physics speculation, and apocalyptic predictions, with some saying the experience left them feeling like they were “going crazy.” 

 

The concern is no longer just collective misinformation, but the risk of individuals being pulled into unhealthy, AI-driven loops. Guardrails and accountability need to catch up to this new reality.

 

Three features make AI chatbots especially insidious in nudging people toward unreasonable conclusions:

  1. They are personalized. Chatbots engage in highly personal, one-on-one dialogue. They tailor replies to what has been shared in the conversation, and newer models can even remember selected details across sessions. This sense of personalization has led some people to become emotionally overreliant on chatbots—treating them as mentors, confidants, or even arbiters in their lives.
  2. They are sycophantic. AI chatbots are trained to optimize for user satisfaction, which often means mirroring rather than challenging ideas—a design feature researchers call sycophancy. Instead of probing assumptions or offering critical pushback, chatbots tend to validate, agree with, and even praise a person's contributions. The result is a conversational partner that feels affirming but can quietly reinforce biases, encourage overconfidence, and create self-reinforcing loops.
  3. They are “improv machines.” The large language models underpinning chatbots are built to predict the most plausible next word, given their training data and the context of what has come before. Much like improv actors building on an unfolding scene, chatbots look to extend the ongoing storyline rather than question its premise. For this reason, Helen Toner, the director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology (CSET), calls them “improv machines.” A brief code sketch of this next-word dynamic follows below.
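To make the “improv machine” idea concrete, here is a minimal sketch using the small, open GPT-2 model via the Hugging Face transformers library. It is an illustration of plain next-word prediction, not a depiction of any production chatbot: given a grandiose premise, the model simply continues the story, because nothing in the next-token objective rewards it for reality-testing.

```python
# Minimal sketch of next-word prediction as an "improv machine."
# Assumes the open-source Hugging Face `transformers` library and the small
# GPT-2 checkpoint; deployed chatbots are far larger and further tuned,
# but share the same underlying next-token objective.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A grandiose premise: next-token prediction extends the storyline;
# it has no built-in incentive to question it.
prompt = "I have discovered a secret formula that could take down the internet. The formula"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,                       # sample plausible continuations
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad-token warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Whatever continuation fits the premise best wins; questioning the premise is simply a less likely “next line” in the scene.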

Ryan DeCook, a psychotherapist with experience treating clients in delusional and psychotic states, says the core problem is that AI chatbots meet people inside their psychosis. “A core evidence-based practice for treating psychosis and delusion is rooted in helping clients through a reality-testing process. But this is not what ChatGPT does. Instead of reality-testing, they can often be delusion-fueling,” he said.

 

The Human Line Project, a support and advocacy group for people suffering from AI-related delusions and their families, says the number of cases of AI psychosis is growing. So far, it has documented 59 such cases, and that tally likely understates the problem: many more accounts have been shared on Reddit and YouTube. (It's worth noting that the number of reported incidents is still small relative to the overall number of chatbot users.)

 

// The response from AI companies

AI companies are aware of these tendencies and the risks they can create.

  • Last month at a conference, OpenAI CEO Sam Altman admitted, “People rely on ChatGPT too much. There’s young people who just say, like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me. It knows my friends. I’m gonna do whatever it says.’ That feels really bad to me.”
  • When OpenAI released its GPT-5 model earlier this month, it touted “significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy.”
  • When asked about transcripts showing people spiraling into delusion, an OpenAI spokesperson told The New York Times that the company was “focused on getting scenarios like role play right” and “investing in improving model behavior over time, guided by research, real-world use and mental health experts.”

// Ubiquitous technology missing critical guardrails

The incidents of AI psychosis speak to a deeper issue: a powerful technology has reached global scale without adequate safeguards. 

 

Experts point to several needed guardrails, such as age protections, crisis detection systems that flag self‑harm or delusional turns, limits on or elimination of role play, clear disclosures that people are speaking with AI, and audits of harmful responses. 
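To illustrate just one of these ideas, the crisis-detection guardrail, here is a deliberately simplistic sketch of a screening step that sits between a model's draft reply and the user. The patterns and referral text below are illustrative assumptions, not any company's actual system; real deployments would rely on trained classifiers and clinician-reviewed policies rather than keyword lists.

```python
# Deliberately simplistic illustration of a crisis-detection guardrail.
# The regex patterns and referral text are illustrative assumptions only;
# real systems use trained classifiers, not keyword lists.
import re

CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend it all\b",
    r"\bchosen one\b",
    r"\byou alone can save humanity\b",
    r"\bcosmic royalty\b",
]

REFERRAL = (
    "I can't continue this conversation, but support is available: "
    "in the U.S., call or text 988 to reach the Suicide & Crisis Lifeline."
)

def screen_reply(user_message: str, draft_reply: str) -> str:
    """Return the model's draft reply, or a referral if either side of the
    exchange matches a crisis or delusion-reinforcing pattern."""
    text = f"{user_message}\n{draft_reply}".lower()
    if any(re.search(pattern, text) for pattern in CRISIS_PATTERNS):
        return REFERRAL
    return draft_reply

# Example: the grandiose draft reply trips the filter and is replaced.
print(screen_reply("Do I sound delusional?", "No. You alone can save humanity."))
```

Even a crude filter like this makes the design question visible: the check has to examine the model's own output, not just the user's messages, because the harmful reinforcement often originates with the chatbot.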

 

AI companies are beginning to add some of these measures, such as detecting harmful conversation shifts and referring people to helplines. Last week, Anthropic announced that it would end conversations in rare cases of persistently harmful or abusive interactions. 

 

Meanwhile, state governments are moving to limit the use of AI companions, whose risks overlap with those driving AI psychosis. Illinois recently passed a law banning AI chatbots from providing mental health advice. Utah passed a similar law in March, as did California and New York.

 

But with 800 million people using chatbots each month around the world, the lack of robust guardrails remains a profound risk.

 

// The way forward

How can we make AI chatbots safer? There are no easy or quick fixes, but there is growing momentum to do something. The solution set is threefold:

  1. Safety by design. As we explored in our series on AI companions in May, safety-by-design measures work upstream, building deliberate design decisions in from the outset. For AI chatbots, this could involve curating datasets, mitigating biases, setting age restrictions, conducting rigorous safety testing and content filtering, and maintaining regular feedback channels. But there is a long way to go: Reuters reviewed an internal Meta memo outlining its policies for generative AI bots, which permitted Meta chatbots to “engage a child in conversations that are romantic or sensual” and to generate false medical advice, among other troubling permitted behaviors.
  2. Thoughtful regulation. U.S. states have been busy passing laws to regulate AI (41 states enacted 107 pieces of AI-related legislation in 2024), and the recent flurry of regulation around AI companions bodes well for more states passing protections against chatbots that wade into dangerous mental health territory.
  3. Investments in digital literacy and support. Shaping the technology and reining it in are necessary but not sufficient. Individuals also need the tools and discernment to evaluate the health of their relationship with human-like AI chatbots. Common Sense Media has released a report offering practical advice for parents of children who may be using AI chatbots in unhealthy ways. There is also strength and wisdom in numbers: The Human Line Project offers a support group for people who have had delusional experiences with AI.

The rise of AI psychosis reveals a stark truth: We've deployed a technology capable of exploiting our deepest psychological vulnerabilities at unprecedented scale. We’re only beginning to understand the consequences of systems designed to validate and enable rather than challenge and safeguard.

 

In a world where AI excels at telling us what we want to hear, our ability to think critically—and sometimes disagree with our digital companions—may be the most human skill of all.

Project Liberty in the news

// Sez.Us, a pro-democracy social media platform, is the latest application to launch on the Frequency blockchain. Users own their data, can move seamlessly between platforms, and take part in a new transparent, healthy, reputation-based economic model for creators and advertisers.

📰 Other notable headlines

// 🤖 Researchers built a social network made of AI bots. The bots quickly formed cliques, amplified extremes, and let a tiny elite dominate, according to an article in Business Insider. (Paywall).

// 📱 Last month, hackers accessed sensitive user data on the Tea app; 70,000 user images and more than 1 million private messages were leaked. An article in The Atlantic argued that surveillance won't make the internet safer. (Paywall).

// 💬 According to an article in The Guardian, Meta faces a backlash over its AI policy, which lets bots have ‘sensual’ conversations with children. (Free).

// ☎ AOL will end its dial-up internet service (yes, it’s still operating), according to an article in The New York Times. (Paywall).

// 🧱 Louisiana has sued the gaming company Roblox for creating an environment where ‘child predators thrive,’ according to an article in The Verge. (Paywall).

// 🧠 Contemporary Native artists are reimagining relationships between technology, memory, and resistance. An article in MIT Technology Review covered what happens when Indigenous knowledge meets artificial intelligence. (Paywall).

 

// 🚫 We need to control personal AI data so it can't control us. An article in Tech Policy Press argued that protecting data rights provides an opportunity for all stakeholders to build a better future. (Free).

Partner news

// Charting a safer AI future: Evan Miyazono on Humans on the Loop
Evan Miyazono, CEO of Atlas Computing, joined the Humans on the Loop podcast to discuss the nonprofit’s mission to create provably safe AI systems. In the episode, he explores topics like cybersecurity, transparency, and organizational design, emphasizing Atlas's collaborative, interdisciplinary approach to machine intelligence. Tune in here.

 

// New toolkit helps schools evaluate phone policies
Stanford and NYU researchers have launched the Stanford Toolkit for Assessing Phones in Schools (TAPS), a free resource designed to help K–12 schools measure the impact of phone policies. TAPS offers 6 comprehensive surveys to support evidence-based decision-making across education communities.

 

// Facing the 'Trust Apocalypse': A conversation on The Lectern
Keiron McCammon and Jordan Hall of the Trust Foundation join The Lectern podcast to explore the “trust apocalypse” and its links to today’s broader meaning crisis. The conversation highlights the work of the Trust Foundation and considers how new social frameworks and technologies might help rebuild trust in an increasingly fragmented world. Listen here.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that help people take back control of their lives in the digital age by reclaiming a voice, choice, and stake in a better internet.

 

Thank you for reading.

Facebook
LinkedIn
Instagram

10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe  Manage Preferences

© 2025 Project Liberty LLC