Your brain thinks AI is human
In the 1990s, Stanford researchers Clifford Nass and Byron Reeves published findings that would influence the design of technologies for decades: people treat computers as social actors, extending politeness and other human courtesies to machines they know are not human.
It sounds far-fetched. But humans have an extraordinary capacity to anthropomorphize non-human things. Today, many people say “please” and “thank you” when interacting with ChatGPT—a habit echoed in a 2019 Pew study showing more than half of smart-speaker owners did the same with Alexa or Google Home. Sam Altman has even noted that these polite flourishes alone add “tens of millions of dollars” in cost to OpenAI’s systems.
When AI chatbots can create the illusion of a romantic relationship with a user on the other side of the screen, new questions arise about how we design AI systems that respect human autonomy and protect against manipulation and fraud. In this newsletter, we explore what responsible AI interaction might look like in an age of increasingly human-like technology.
// Why we anthropomorphize machines
Anthropomorphism has been around for centuries. Today, we name our cars, boats, and hurricanes. We curse at our computers when they freeze, and we describe the stock market as “jittery” and afternoon breezes as “gentle.” Anthropomorphizing helps us make sense of a complicated world, turning the unfamiliar into the familiar and creating a shorthand for storytelling and understanding.
But with sophisticated AI chatbots, anthropomorphism takes on new meaning and weight. These systems engage and respond to us in interpersonal ways that feel strikingly intimate and human. It’s why many people are convinced they’re talking to something real and sentient.
// The risks of pretending to be human
When chatbots mimic human qualities, people bond. For some, the bond is benign and complementary to the other relationships in their lives. For others, as we explored in a recent newsletter on the danger of AI companions, it can become entangling, addictive, and difficult to separate from their real social world. In many cases, this is by design.
Today's AI chatbots are deliberately designed to maximize engagement. A study from earlier this year identified specific “dark addiction patterns” in chatbots: design choices engineered to drive compulsive engagement in ways that resemble gambling. The attachment is real. Last month, after Character.ai changed its policy so teens could no longer use its companion chatbots, teens were distraught: “I cried over it for days,” one said.
Systems with human-like qualities instill trust and elicit goodwill, making users vulnerable to manipulation and exploitation. Research published in the Proceedings of the National Academy of Sciences (PNAS) this year warned of “anthropomorphic seduction.”
When chatbots engage in human behaviors like reciprocal self-disclosure or displays of empathy, users become more trusting, and more susceptible to persuasion, privacy violations, and harmful advice. At its core, a chatbot is a probabilistic pattern-completer, yet it’s easy to overestimate its reasoning abilities when it uses “I” and appears to “think.”
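To make “pattern-completer” concrete, here is a deliberately oversimplified sketch. The tokens and probabilities below are made up for illustration (a real LLM is a neural network trained on vast text, not a lookup table), but the core mechanism is the same: given the text so far, sample the next token from a probability distribution. Even a first-person sentence like “I feel happy” falls out of weighted sampling, not an inner life.

```python
import random

# Toy "language model": a table of next-token probabilities.
# Tokens and numbers are invented for illustration only.
NEXT_TOKEN_PROBS = {
    "I feel": {"happy": 0.4, "sad": 0.3, "tired": 0.3},
    "I feel happy": {"today": 0.6, "because": 0.4},
}

def complete(prompt: str, max_steps: int = 2) -> str:
    """Extend the prompt by sampling each next token from its probability table."""
    text = prompt
    for _ in range(max_steps):
        probs = NEXT_TOKEN_PROBS.get(text)
        if probs is None:  # no learned pattern for this context
            break
        tokens, weights = zip(*probs.items())
        text += " " + random.choices(tokens, weights=weights)[0]
    return text

print(complete("I feel"))  # e.g. "I feel happy today" -- sampled, not felt
```

Scale that idea up by many orders of magnitude and you get fluent, first-person prose; the mechanism underneath is still probabilistic completion.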
A four-week randomized controlled trial with 981 participants found that heavy chatbot usage was associated with increased loneliness, emotional dependence, and reduced real-world socialization. The more people used chatbots, the more isolated they became from other people.
AI companies have added disclaimers to their products; OpenAI displays “ChatGPT can make mistakes.” But these warnings might not go far enough. A small disclaimer cannot counteract the hundreds of design choices engineered to make the system feel human.
// Not all is lost
Anthropomorphism does have some benefits.
- AI chatbots that reflect human characteristics can be effective tutors in an educational context, delivering personalized learning at scale.
- One meta-analysis from 2024 found that chatbots can, in certain circumstances, reduce depression and anxiety, and another study found they can reduce loneliness.
But such benefits appear only in specific use cases, with clear guardrails and appropriate usage. For young internet users, the risk might still be too great. In a live conversation with Bari Weiss last month, Jonathan Haidt recommended limiting the use of AI chatbots for underage users until the effects have been thoroughly studied. “No children should be having a relationship with AI,” he said earlier this year.
Age restrictions are a start, and Character.ai’s ban of its chatbots for teens is a positive step, but more is needed, including state-level laws that protect young users and place accountability on the tech companies themselves.
At an individual level, it might start with our own language. An article in The Conversation recommends the following steps to change our vocabulary:
- Add technical counterweights to any metaphors that frame AI in human terms. Clear explanations about exactly what the technology is doing can help remind us that this is a technology, not a sentient being.
- Avoid granting AI absolute, human-like agency. Replace verbs like the AI “decides” with alternatives like the LLM “recommends” or “classifies.”
- Emphasize the humans in the loop. “Naming developers and regulators reminds us that technology does not emerge from a vacuum,” the article says.
- Use fewer anthropomorphic images and descriptors. Couching AI in human terms is easy, but more intentional language keeps the conversation accurate.
// What's your perspective?
Can AI companies be trusted to self-regulate given the economic incentives for maximizing engagement? Do we need better disclaimers? More regulation? Should we eliminate the word “I” from chatbots? Reply to this email with your thoughts.