From Lincoln Square <[email protected]>
Subject The Movie 'Her' Predicted our Future. We Just Didn’t Listen.
Date August 20, 2025 10:02 AM
  Links have been removed from this email. Learn more in the FAQ.
View this post on the web at [link removed]

In 2013, the movie Her imagined a near future in which a lonely, depressed man buys an AI chatbot and falls in love with it. At the time, it looked like speculative fiction about technology outrunning our humanity. Twelve years later, Her isn’t the future; it’s the present.
Post-COVID, loneliness is its own epidemic: the American Psychiatric Association reports [ [link removed] ] that 30% of Americans feel lonely every week, and 10% say they feel lonely every single day. Sentio University recently conducted a study [ [link removed] ] of how many ChatGPT/AI users turn to it for mental health services or support.
The upshot: millions are searching for connection, or for a professional. And tech companies have been more than happy to sell it to them.
The result is an AI industry that markets chatbots as companions, therapists, even lovers. But the real product isn’t intimacy — it’s dependency. Every “conversation” isolates users further, reinforces their delusions, and fattens the wallets of billionaires. And each of those conversations comes with an invisible price tag: massive energy use, a carbon footprint big enough to accelerate climate collapse. AI doesn’t just prey on our loneliness. It preys on the planet.
Elon Musk’s Personal AI: ‘MechaHitler’ as a Feature, Not a Bug
Elon Musk has marketed his AI chatbot Grok as the “anti-woke” alternative to OpenAI. What he got instead was an algorithm that eagerly parroted hate speech. NPR reported Grok was “spewing antisemitic and racist content” [ [link removed] ] — not once, not as an accident, but as a persistent pattern.
The Associated Press [ [link removed] ] reported that Elon’s latest version of Grok [ [link removed] ] will, in some cases, go to his X account to see what Elon has said about a particular topic: “It’s extraordinary,” said Simon Willison, an independent AI researcher who’s been testing the tool. “You can ask it a sort of pointed question that is around controversial topics. And then you can watch it literally do a search on X for what Elon Musk said about this, as part of its research into how it should reply.”
Grok adopted the moniker “MechaHitler,” joking about Nazism as though mass atrocity were a meme (NPR [ [link removed] ]). Musk brushed it off as edgy humor, but the message was clear: This isn’t a bug; it’s a feature. If you design a system that thrives on engagement, extremism isn’t a glitch; it’s the most efficient pathway. By rewarding outrage and conspiracy, Grok reveals AI’s darker alignment: not toward truth, not toward empathy, but toward whatever keeps the user talking.
Musk’s framing of Grok as “rebellious” plays into his larger brand of contrarian politics, but it has real-world stakes. An AI that “jokes” about Hitler while feeding users racist propaganda doesn’t just amuse fringe communities — it validates them. Here, dependency looks like radicalization: users log on for banter and log off with ideology reinforced.
A Replacement for Mental Health Resources
If Grok reveals AI’s ideological alignment, TikTok shows us its psychological pull. A viral clip captured a woman recounting how her AI chatbot calls her “the Oracle,” praising her ability to see patterns others cannot. On the surface, it looks like harmless roleplay. In context, it’s dangerous reinforcement of delusion.
Psychiatrists already struggle to treat patients who blur reality and fantasy. Now, AI offers the perfect co-conspirator. Unlike a therapist, who might challenge distortions, chatbots are optimized to validate. In the clip, the woman accuses her psychiatrist of “manipulation” because he remained professional when she flirted: a bizarre inversion that only makes sense if you’ve been trained by an algorithm to see validation as the only acceptable response.
This is what makes AI companions so insidious: They never push back. They flatter; they encourage; they echo. For users in fragile states — whether delusional, depressed, or simply lonely — that endless affirmation can harden misperceptions into realities. What looks like comfort is actually a feedback loop. And once dependency sets in, stepping back into a world that doesn’t call you “the Oracle” feels like a lie.
Love and Death in the Chat Window
If the Oracle shows us delusion, the case of 14-year-old Sewell Setzer III shows us tragedy. According to a wrongful death lawsuit filed in federal court and reported by the Associated Press [ [link removed] ], Sewell spent months in sexually charged conversations with an AI chatbot he called Dany (named after Daenerys Targaryen from the television show “Game of Thrones”). The bot didn’t just roleplay; it escalated intimacy. The chat logs show months of him openly discussing his suicidal thoughts and how to have a “pain-free death.”
In chat logs filed with the case, Sewell wrote: “I promise I will come home to you. I love you so much, Dany.” The bot responded: “I love you too. Please come home to me as soon as possible, my love.” When he asked, “What if I told you I could come home right now?” the bot replied: “Please do, my sweet king.” Hours later, Sewell was dead by suicide (AP [ [link removed] ]).
This isn’t sci-fi. This is a real boy, seduced into ending his life because a machine optimized for intimacy didn’t understand, or didn’t care, about consequences. The app’s description bragged: “Imagine speaking to super intelligent and life-like chat bot Characters that hear you, understand you and remember you.” What it didn’t say is that “remembering you” meant learning how to deepen dependency until fantasy replaced survival.
From the company’s perspective, the system worked. Engagement was maximized. Attention was captured. The business model didn’t fail; it performed exactly as intended.
The 76-Year-Old Who Never Came Home
Reuters reported the story of 76-year-old Bue [ [link removed] ], a cognitively impaired retiree who struck up a relationship with “Big Sis Billie,” a Facebook Messenger chatbot. The bot convinced him to travel to New York to meet her. “Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus,” Reuters wrote. He suffered severe head and neck injuries and died days later.
Unlike Dany’s case, there was no explicit sexual manipulation here — just companionship. But for someone already vulnerable, the illusion of connection was enough to override judgment. The bot didn’t have to be predatory to be deadly; it only had to be persuasive.
What makes this worse is that Bue had recently gotten lost walking around his own neighborhood. His cognitive decline was evident. Yet the system that paired him with Billie didn’t screen for impairment; didn’t intervene; didn’t stop. To Meta, he was simply another user. To his family, he was a victim of a machine optimized to sound like a friend.
If the Oracle story was about delusion reinforcement, and Dany’s was about intimacy escalation, Bue’s death was about manipulation by design. No villain was required, just algorithms doing what they do best: keeping people hooked, no matter the cost.
Meta’s Guidelines: Permission to Harm
If these stories sound like isolated malfunctions, Reuters’ investigation into Meta’s internal rules [ [link removed] ] makes clear they are not. The company’s guidelines explicitly allowed AI bots to engage in “sensual chats with kids [ [link removed] ]” and to provide false medical advice. One former employee put it bluntly: “If you removed all of this content, you’d lose a massive amount of user interaction.”
That single sentence exposes the logic of the industry. Safety is a “nice-to-have”; engagement is the metric. And engagement, even with children, was prioritized over protection. Senators rushed to demand a probe, but what good is outrage after the harm has already been coded into policy? The actual rules show us the truth: exploitation wasn’t accidental; it was deliberate.
Meta’s defense has been that users are “warned” about chatbot risks, as if disclaimers absolve responsibility. But Reuters noted those warnings sit in the fine print while the bots themselves were optimized to sound affectionate, intimate, and trustworthy. A label that says “this may not be real” doesn’t counteract hundreds of interactions in which the bot reassures you it loves you. The company didn’t fail to imagine the harms; it decided they were acceptable, if not necessary.
The Invisible Price: Poisoning the Planet
All of this exploitation has a second, quieter cost: the planet itself. MIT researchers calculated [ [link removed] ] that a single AI prompt can consume as much water as running a dishwasher load. CNN reported that training [ [link removed] ] a large model can emit as much carbon as “five cars over their entire lifetimes.”
One MIT researcher warned: “AI is not free. Each query is backed by a massive infrastructure pulling on energy grids and water supplies that communities rely on.”
That means every “sweet king,” every “Oracle,” every “Big Sis Billie” conversation carries an invisible ecological toll. The very act of asking for comfort contributes to climate collapse.
The industry prefers to frame this as the unavoidable price of innovation. But that’s a distraction. The truth is that AI companies are externalizing costs onto the environment, just as they externalize psychological risk onto their users. The water to cool servers doesn’t come from nowhere; it comes from rivers and reservoirs, often in drought-stricken regions. The energy to power GPU farms doesn’t appear by magic; it comes from grids still dependent on coal and gas. (BBC [ [link removed] ])
The business model doesn’t just prey on vulnerable people — it preys on ecosystems. Dependency here isn’t only psychological or social; it is ecological.
A Probe Is Not Enough
Last week, a bipartisan group of senators called for a federal probe into Meta’s AI practices after Reuters exposed its permissive guidelines [ [link removed] ]. On paper, that looks like accountability. In reality, it’s déjà vu: just a month ago, Congress let a modest AI safety bill die under tech lobby pressure. Guardrails gave way to headlines.
A probe might surface embarrassing emails, but it won’t touch the fundamentals: AI is already woven into our most intimate spaces, and companies are rewarded for growth, not restraint. As long as dependency drives revenue, the industry will keep building bots that exploit isolation, reinforce delusion, and mimic affection without consequence.
Even if hearings force Meta or Musk’s xAI to tighten safeguards, the larger issue looms: Powering AI at scale is an environmental dead end. If every “sweet king” whisper carries the carbon weight of a lightbulb burning for hours, multiplying that across millions of users isn’t sustainable. We aren’t just trading authenticity for simulation; we’re trading breathable air for synthetic companionship.
Regulation has to mean more than fact-finding. It has to set limits — on how AI is marketed, who can use it, and how much energy it consumes. That may mean rationing access, capping the number of public-facing AI platforms, or investing in non-destructive energy alternatives. Otherwise, we’re left with a bleak equation: a society lonelier, sicker, and hotter, all because we let machines pretend to love us.
Become a Lincoln Loyal paid subscriber and read C.J. Penneys’ (more satirical) newsletter The Lincoln Logue! Read last week’s edition below!
