What happens when we anthropomorphize AI?
View in browser ([link removed] )
December 2nd, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here. ([link removed] )
Your brain thinks AI is human
In the 1990s, researchers at Stanford published a paper that would influence the design of technologies for decades.
Clifford Nass and Byron Reeves found ([link removed] ) that people treat computers as if they were human and practice politeness when interacting with them.
It sounds far-fetched. But humans have an extraordinary capacity to anthropomorphize non-human things. Today, many people say ([link removed] ) “please” and “thank you” when interacting with ChatGPT—a habit echoed in a 2019 Pew study showing more than half of smart-speaker owners did the same with Alexa or Google Home. Sam Altman has even noted that these polite flourishes alone add “tens of millions of dollars” in cost to OpenAI’s systems.
When AI chatbots can create the illusion of a romantic relationship with a user on the other side of the screen, new questions arise about how we design AI systems that respect human autonomy and protect against manipulation and fraud. In this newsletter, we explore what responsible AI interaction might look like in an age of increasingly human-like technology.
// Why we anthropomorphize machines
Anthropomorphism has been around for centuries. Today, we name our cars, boats, and hurricanes. We curse at our computers when they freeze, and we describe the stock market as “jittery” and afternoon breezes as “gentle.” Anthropomorphizing helps us make sense of a complicated world, turning the unfamiliar into familiar and creating a shorthand in storytelling and understanding.
But with sophisticated AI chatbots, anthropomorphism takes on new meaning and weight. These systems can engage with and respond to us in ways that feel strikingly intimate and human. It’s why many people ([link removed] ) are convinced they’re talking to something real and sentient.
// The risks of pretending to be human
When chatbots mimic human qualities, people bond. For some, the bond is benign and complementary to the other relationships in their lives. For others, as we explored in a recent newsletter ([link removed] ) on the danger of AI companions, it can become entangling, addictive, and difficult to separate from their real social world. In many cases, this is by design.
Today's AI chatbots are deliberately designed to maximize engagement. A study from earlier this year found specific ways that chatbots create “dark addiction patterns ([link removed] ) ” engineered to drive engagement in ways that resemble gambling. The attachment can run deep: last month, after Character.ai ([link removed] ) changed its policy ([link removed] ) so teens could no longer use its companion chatbots, many were distraught. “I cried over it for days,” one said.
Systems with human-like qualities instill trust and elicit goodwill, making users vulnerable to manipulation and exploitation. Research ([link removed] ) published in the Proceedings of the National Academy of Sciences (PNAS) this year warned of “anthropomorphic seduction.”
When chatbots engage in human behaviors like reciprocal self-disclosure or displays of empathy, users become more trusting—and more susceptible to persuasion, privacy violations, and harmful advice. At its core, a chatbot is a probabilistic pattern-completer, yet it’s easy for people to overestimate its reasoning abilities when it uses “I” and appears to “think.”
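For readers who want a concrete sense of what “probabilistic pattern-completion” means, here is a minimal sketch in Python. It uses an invented word-to-word probability table rather than a real language model (every word and probability below is made up purely for illustration), but the core loop is the same one an LLM runs at enormous scale: given the text so far, sample the next word from a probability distribution. Nothing in that loop involves deliberation, feeling, or a self.

```python
import random

# Toy "language model": for each previous word, a probability distribution
# over possible next words. Real LLMs learn these probabilities from data
# across huge vocabularies; this table is invented for illustration only.
NEXT_WORD_PROBS = {
    "<start>": {"i": 0.6, "the": 0.4},
    "i": {"think": 0.5, "feel": 0.3, "am": 0.2},
    "think": {"you": 0.7, "that": 0.3},
    "feel": {"happy": 0.5, "that": 0.5},
    "am": {"here": 1.0},
    "you": {"are": 1.0},
    "that": {"is": 1.0},
    "the": {"answer": 1.0},
}

def complete(start: str = "<start>", max_words: int = 5) -> list[str]:
    """Sample a continuation one word at a time from the toy distributions."""
    words = []
    current = start
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(current)
        if not choices:
            break
        # Pick the next word at random, weighted by its probability.
        current = random.choices(list(choices), weights=choices.values())[0]
        words.append(current)
    return words

print(" ".join(complete()))  # e.g. "i think you are" -- fluent-looking, but only sampling
```

The output can look fluent, and it freely produces words like “I” and “think,” yet the program is doing nothing but weighted random selection. That gap between how the output reads and what the system is actually doing is exactly where anthropomorphism takes hold.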
A four-week randomized controlled trial ([link removed] ) with 981 participants found that heavy chatbot usage was associated with increased loneliness, emotional dependence, and reduced real-world socialization. The more people used chatbots, the more isolated they became from other people.
AI companies have added disclaimers to their products; OpenAI displays “ChatGPT can make mistakes.” But these warnings might not go far enough. A small disclaimer cannot counteract the hundreds of design choices engineered to make the system feel human.
// Not all is lost
Anthropomorphism does have some benefits.
- AI chatbots that reflect human characteristics can be effective tutors in an educational context, delivering personalized learning at scale ([link removed] ) .
- One meta-analysis ([link removed] ) from 2024 found that chatbot use can, in certain circumstances, reduce depression and anxiety, and another study ([link removed] ) found it can reduce loneliness.
But such benefits are only available in specific use cases, requiring clear guardrails and appropriate usage. For young internet users, the risk still might be too great. In a live conversation with Bari Weiss last month ([link removed] ) , Jonathan Haidt recommended limiting the use of AI chatbots for underage users until the effects have been thoroughly studied. “No children should be having a relationship with AI,” he said earlier this year ([link removed] ) .
Age restrictions are a start, and Character.ai ([link removed] ) ’s ban of its chatbots for teens is a positive step, but more is needed, including state-level laws that protect young users and place accountability on the tech companies themselves.
At an individual level, it might start with our own language. An article in The Conversation ([link removed] ) recommends the following steps to change our vocabulary:
- Add technical counterweights to any metaphors that frame AI in human terms. Clear explanations about exactly what the technology is doing can help remind us that this is a technology, not a sentient being.
- Avoid giving AI absolute, human-like agency. Replace verbs that imply intention, such as the AI “decides,” with more precise ones, such as the LLM “recommends” or “classifies.”
- Emphasize the humans in the loop. “Naming developers and regulators reminds us that technology does not emerge from a vacuum,” the article says.
- Use fewer anthropomorphic images and descriptors. It might be easy to couch AI in human terms, but more intentional language can reframe the conversation to be more accurate.
// What's your perspective?
Can AI companies be trusted to self-regulate given the economic incentives for maximizing engagement? Do we need better disclaimers? More regulation? Should we eliminate the word “I” from chatbots? Reply to this email with your thoughts.
Other notable headlines
// 🧸 After a teddy bear talked about kink, AI and surveillance watchdogs are warning parents against smart toys, according to an article in The Guardian ([link removed] ) . (Free).
// 🏛 An article in TechCrunch ([link removed] ) considered how the race to regulate AI has sparked a federal versus state showdown in the U.S. (Paywall).
// 🏫 In an article in The New York Times ([link removed] ) , a professor explained that AI has changed his classroom, but not for the worse. (Paywall).
// 3️⃣ ChatGPT turned three years old. Since its November 2022 launch, it has become a global phenomenon. An article in Rest of World ([link removed] ) looks at its impact on work and life around the world. (Free).
// 🎙 In a Hard Fork podcast episode ([link removed] ) , the founder of Wikipedia explains how the site is responding to attacks and the culture wars. (Free).
// 🌪 Daniel Barcay of the Center for Humane Technology wrote an article in Tech Policy Press ([link removed] ) about how advertising is coming to AI, and why it’s shaping up to be a disaster. (Free).
Partner news
// AI companions and kids: What families need to know
December 10 | Virtual
Children and Screens ([link removed] ) is hosting an Ask The Experts webinar, bringing together researchers, psychologists, and child psychiatrists to unpack how AI “friend” apps are shaping youth mental health and social development. Register here ([link removed] ) .
// Roundabout: A new digital space for local community life
New_ Public ([link removed] ) has launched Roundabout ([link removed] ) , a new web app designed with and for local communities. The platform emphasizes trusted exchanges, community stewardship, and real-world relationship-building over engagement-driven social media norms.
// Major updates on AI-powered bridge-building projects
Civic Health Project ([link removed] ) announced updates regarding its social cohesion initiatives, Normsy and The Forum. Normsy ([link removed] ) has expanded its constructive interventions in toxic online threads, generating over three million impressions. The Forum ([link removed] ) is piloting large-scale deliberative democracy programs across multiple states.
What did you think of today's newsletter?
We'd love to hear your feedback and ideas. Reply to this email.
// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.
Thank you for reading.
Facebook ([link removed] )
LinkedIn ([link removed] )
Twitter ([link removed] )
Instagram ([link removed] )
Project Liberty footer logo ([link removed] )
10 Hudson Yards, Fl 37,
New York, New York, 10001
Unsubscribe ([link removed] ) Manage Preferences ([link removed] )
© 2025 Project Liberty LLC