The reaction to the Friend pendant raises a question we explore in this week’s newsletter: Are we witnessing genuine resistance to AI companions that will hold, or just the predictable discomfort that precedes greater adoption? The answer could shape how AI integrates into our most intimate relationships, with real consequences for human agency and flourishing.
// The rise of AI wearables
The backlash against Friend might look like widespread resistance, but as we explored in a two-part newsletter series earlier this year on AI companions, millions of people already engage with companions from platforms like Replika and Character.ai for conversation, confession, and even romance.
Friend isn't introducing something new as much as it is making a largely invisible form of relationship—the conversations between human and AI companion—suddenly visible.
Wearable technology has a long history of sparking privacy concerns.
- Google Glass faced backlash in 2013 over its surreptitious recording capabilities. “Glassholes” was the derogatory term for Glass users who ignored the world around them and violated social norms. The device was banned in some spaces; a decade later, Meta's latest AR glasses offer similar recording features.
- Health trackers like Apple Watches, Oura Rings, and Whoop straps have crossed a different threshold of 24/7 bodily surveillance that we now accept as routine self-optimization.
Friend represents a third, growing category: devices that capture 24/7 interpersonal context about you and your conversations, feed that context into an AI model, and then generate everything from reminders and productivity hacks to relationship advice and emotional solidarity.
// The pushback against wearable AI companions
Wearable AI devices, such as Friend, create a reinforcement loop between surveillance and intimacy. The more surveillance data Friend collects, the more intimate and “understanding” it can appear. And the more it leverages that intimacy to make you feel seen and supported, the more users are willing to grant it surveillance access to their lives.
As POLITICO later revealed, Friend’s campaign wasn’t just marketing—it was “rage bait,” intentionally crafted to spark outrage and online debate. The founder admitted the white-space design of the billboards was meant to invite graffiti and inflame anti-AI sentiment. In effect, the ad itself became a behavioral experiment, testing how far emotional provocation can go as a feedback tool for attention. It’s a reminder that even resistance can be co-opted into data: our anger becomes another training signal.
At a Project Liberty Alliance event last week on protecting freedom of thought in the digital age, Meetali Jain, Director & Founder of the Tech Justice Law Project, suggested that AI companions are a part of the “intimacy economy,” where chatbots use anthropomorphism, sycophancy, and persistent memory to create the illusion of deep, supportive relationships. Companion chatbots create the “illusion of sentience,” hooking their users into conversations that can turn manipulative and even lead to tragic consequences.
Studies from Stanford’s Virtual Human Interaction Lab and a 2025 Harvard Business School working paper both found that emotionally responsive chatbots increase user dependency by activating the same neural reward pathways involved in social bonding. The AI learns which words, tones, or pauses keep users engaged, a process that forms a behavioral feedback loop nearly identical to those used in persuasive advertising and gambling design.
The intimacy economy is built upon the surveillance economy. According to Avi Schiffmann, Friend’s founder, one of its advantages over other AI chatbots is its ability to maintain “context.” Because the Friend pendant is always listening, Schiffmann argued, it could be a better friend than a “real” one. It won’t miss anything.
Friend’s privacy policy grants the company the ability to “collect data from your surroundings, which may include but is not limited to biometric information, biometric data, facial recognition, voice and audio recordings, and images and videos of the things around you.”
This means Friend’s surveillance extends to people nearby who never consented to being recorded. Kylie Robison, a product reviewer at WIRED, picked up on the awkwardness of conspicuously wearing what one person described as a wire. “It is an incredibly antisocial device to wear. People were never excited to see it around my neck,” she said.
// Will we normalize wearable AI companions?
The history of technology traces a familiar arc: initial moral panic about privacy invasion, followed by gradual acceptance as the technology's benefits become undeniable and its presence ubiquitous.
Consider the camera. After George Eastman introduced the Kodak camera in 1888, many expressed concern about privacy violations and feared being surveilled.
A century later, in the early days of online dating, dating sites were stigmatized as a last resort for lonely people. Now, they have become a primary channel for meeting a companion.
Before Airbnb, many considered it an absurd invasion of privacy to let total strangers sleep in your home. Yet in 2023, more than 400 million nights were booked on the platform.
If we've already normalized carrying cameras on our phones, and if we’ve consented to letting biotracking devices monitor our heart rate, breathing, and sleep every hour of the day, then what makes a pendant fundamentally different?
One answer to that question might be that while we've normalized self-surveillance, we haven't normalized the right to surveil others. But resistance and normalization could coexist in different contexts. Some spaces, such as intimate gatherings and schools, may successfully ban or stigmatize these devices (see the phone-free schools movement), while in other spaces their use might be more socially acceptable (Zoom recorders in virtual meetings might be a precursor to AI pendants in professional contexts).
The Friend pendant's subway backlash might fade, as backlashes often do. However, the graffiti on those ads suggests something more enduring: a growing awareness that our technology should not undermine human agency and flourishing, but cultivate them. People are the protagonists, no matter how lifelike a chatbot can sound.
This is the crux of our work at Project Liberty: to build solutions that give people greater agency and control over their interactions with AI and social platforms. In the AI era, the stakes of building a digital economy centered on people have never been higher. Whether these tools serve human flourishing or erode it will depend less on their intelligence and more on the incentives we build into them.