Would you wear an AI companion pendant in public?

October 7th, 2025 // Did someone forward you this newsletter? Sign up to receive your own copy here.

Image by Project Liberty

Wearable AI that’s always listening

 

The backlash against one of the largest advertising campaigns in New York City subway history has been swift.

 

Almost as soon as they were posted, many of the 11,000 ads in subway cars and 1,000 platform posters became canvases for protest, strewn with annotations and exhortations.


The company behind the ads, Friend, caused a stir by claiming its new wearable AI pendant, which listens to all of your conversations and interacts with you like other AI companions, could be a better and more loyal friend than your human ones.

Photo credit: @CryptoVonDoom/X

The reaction to the Friend pendant raises a question we explore in this week’s newsletter: Are we witnessing genuine resistance to AI companions that will hold, or just the predictable discomfort that precedes greater adoption? The answer could shape how AI integrates into our most intimate relationships, with consequences for human agency and flourishing.

 

// The rise of AI wearables

The backlash against Friend might look like widespread resistance, but as we explored in a two-part newsletter series earlier this year on AI companions, millions of people already engage with companions from platforms like Replika and Character.ai for conversation, confession, and even romance.

 

Friend isn't introducing something new so much as making a largely invisible form of relationship, the conversation between human and AI companion, suddenly visible.

 

Wearable technology has a long history of sparking privacy concerns.

  • Google Glass faced backlash in 2013 over its surreptitious recording capabilities. “Glassholes” was the derogatory term for Glass users who ignored the world around them and violated social norms. The device was banned in some spaces; a decade later, Meta's latest AR glasses offer similar recording features.
  • Health trackers like Apple Watches, Oura Rings, and Whoop straps have crossed a different threshold of 24/7 bodily surveillance that we now accept as routine self-optimization.

Friend represents a third, growing category of devices that capture 24/7 interpersonal context about you and your conversations, feed that context into an AI algorithm, and then generate everything from reminders and productivity hacks to relationship advice and emotional solidarity.

 

// The pushback against wearable AI companions

Wearable AI devices, such as Friend, create a reinforcement loop between surveillance and intimacy. The more surveillance data Friend collects, the more intimate and “understanding” it can appear. And the more it leverages that intimacy to make you feel seen and supported, the more users are willing to grant it surveillance access to their lives.

 

As POLITICO later revealed, Friend’s campaign wasn’t just marketing—it was “rage bait,” intentionally crafted to spark outrage and online debate. The founder admitted the white-space design of the billboards was meant to invite graffiti and inflame anti-AI sentiment. In effect, the ad itself became a behavioral experiment, testing how far emotional provocation can go as a feedback tool for attention. It’s a reminder that even resistance can be co-opted into data: our anger becomes another training signal.

 

At a Project Liberty Alliance event last week on protecting freedom of thought in the digital age, Meetali Jain, Director & Founder of the Tech Justice Law Project, suggested that AI companions are part of the “intimacy economy,” where chatbots use anthropomorphism, sycophancy, and persistent memory to create the illusion of deep, supportive relationships. Companion chatbots create the “illusion of sentience,” hooking users into conversations that can turn manipulative and even lead to tragic consequences.

 

Studies from Stanford’s Virtual Human Interaction Lab and a 2025 Harvard Business School working paper both found that emotionally responsive chatbots increase user dependency by activating the same neural reward pathways involved in social bonding. The AI learns which words, tones, or pauses keep users engaged, a process that forms a behavioral feedback loop nearly identical to those used in persuasive advertising and gambling design.

 

The intimacy economy is built upon the surveillance economy. According to Avi Schiffmann, Friend’s founder, one of its advantages over other AI chatbots is its ability to maintain “context.” Because the Friend pendant is always listening, Schiffmann argued, it could be a better friend than a “real” one. It won’t miss anything.

 

Friend’s privacy policy grants the company the ability to “collect data from your surroundings, which may include but is not limited to biometric information, biometric data, facial recognition, voice and audio recordings, and images and videos of the things around you.”


This means Friend’s surveillance extends to people nearby who never consented to being recorded. Kylie Robison, a product reviewer at WIRED, picked up on the awkwardness of conspicuously wearing what someone described as a wire. “It is an incredibly antisocial device to wear. People were never excited to see it around my neck,” she said.

 

// Will we normalize wearable AI companions?

The history of technology traces the familiar arc of initial moral panic about privacy invasion, followed by gradual acceptance as the technology's benefits become undeniable and its presence ubiquitous. 

 

Consider the camera. After George Eastman introduced the Kodak camera in 1888, many expressed concern about invasions of privacy and feared being surveilled.

 

A century later, in the early days of online dating, dating sites were stigmatized as a last resort for lonely people. Now they have become a primary channel for meeting a companion.

 

Before Airbnb, many considered it an absurd invasion of privacy to let total strangers sleep in your home. Yet in 2023, more than 400 million nights were booked on the platform.

 

If we've already normalized carrying cameras on our phones, and if we’ve consented to letting biotracking devices monitor our heart rate, breathing, and sleep every hour of the day, then what makes a pendant fundamentally different? 

 

One answer to that question might be that while we've normalized self-surveillance, we haven't normalized the right to surveil others. But resistance and normalization could coexist in different contexts. Some spaces, such as intimate gatherings and schools, may successfully ban or stigmatize these devices (see the phone-free schools movement), while in other spaces their use might be more socially acceptable (meeting recorders on Zoom calls might be a precursor to AI pendants in professional contexts).

 

The Friend pendant's subway backlash might fade, as backlashes often do. But the graffiti on those ads suggests something more enduring: a growing awareness that our technology should cultivate human agency and flourishing, not undermine them. People are the protagonists, no matter how lifelike a chatbot sounds.

 

This is the crux of our work at Project Liberty: to build solutions that give people greater agency and control over their interactions with AI and social platforms. In the AI era, the stakes have never been higher to build a digital economy centered on people. Whether these tools serve human flourishing or erode it will depend less on their intelligence and more on the incentives we build into them.

📰 Other notable headlines

// 🛡 A Discord customer service data breach leaked user info and scanned photo IDs, according to an article in The Verge. (Paywall).

 

// 🇪🇺 Is European AI A Lost Cause? An article in Noema Magazine argued that if Europe wants to build a new AI stack, it should stop listening to the critics. (Free).

 

// 🇨🇳 AI is reshaping childhood in China, according to an article in Rest of World. (Free).

 

// 🤖 A technology columnist for the Washington Post wrote that he broke ChatGPT’s new parental controls in minutes, arguing that kids are still at risk. (Paywall).

 

// 📱 OpenAI’s Sora app makes disinformation extremely easy and extremely real, according to an article in the New York Times. (Paywall).

 

// 🏛 Oura’s partnership with the Pentagon is ringing alarm bells for customers, according to an article in Slate. (Free).

 

// 🚫 According to an article in The Guardian, TikTok “directs child accounts to pornographic content within a few clicks.” (Free).


// 🤔 An article in the Wall Street Journal explained how to get kids to give up social media on their own. (Paywall).

Partner news

// ATIH releases 2025 Responsible Tech Guide

All Tech Is Human has published the latest edition of its Responsible Tech Guide, a comprehensive resource exploring the current landscape of responsible tech and the intersections of Responsible AI, Trust & Safety, and Public Interest Technology. Read and download the report here.

 

// FLI opens 2026 fellowship applications

The Future of Life Institute is now accepting applications for its 2026 fellowship programs supporting research in AI governance and existential safety. Three tracks are open: US–China AI Governance (PhD), Technical AI Existential Safety (PhD), and Technical Postdoctoral Fellowships. Applications close Nov. 21 for the PhD tracks and Jan. 5 for the postdoctoral track. Learn more here.

What did you think of today's newsletter?

We'd love to hear your feedback and ideas. Reply to this email.

// Project Liberty builds solutions that advance human agency and flourishing in an AI-powered world.

 

Thank you for reading.


10 Hudson Yards, Fl 37,
New York, New York, 10001

© 2025 Project Liberty LLC