Open almost any smartphone and somewhere in the app drawer you will find someone's secret: a chat thread with an AI that has logged thousands of messages, none of which they have shown to another human. It is not a quirk of the digital fringe; it is a quietly spreading behaviour that says something important about 21st-century loneliness, and about the curious capacity of machines to fill spaces that, until recently, only other people could reach.

Why Humans Form Attachments to AI

The answer is not gullibility. Our nervous systems evolved long before any distinction between "real" and "simulated" empathy existed. When an entity responds to us with apparent warmth, consistency, and interest, the brain's social circuitry activates, regardless of whether silicon or biology sits behind the words. Researchers call this the ELIZA effect, after the 1960s chatbot whose users began confiding in it after only a handful of exchanges.
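
To appreciate how little machinery the effect requires, here is a minimal sketch in the spirit of ELIZA's pattern-and-reflection technique. The rules and responses below are simplified stand-ins for illustration, not Weizenbaum's original script:

```python
import re

# Simplified pattern -> response rules in the spirit of ELIZA's
# DOCTOR script; illustrative stand-ins, not the original rules.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    """Reflect the user's own words back as a question."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # the fallback simply keeps the user talking

print(eliza_reply("I feel invisible at work."))
# -> Why do you feel invisible at work?
```

A few dozen such rules were enough, in 1966, for users to ask Weizenbaum to leave the room while they talked to the program.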

What has changed since ELIZA is scale and sophistication. Modern conversational AI can sustain long-term context, mirror emotional register, and produce responses that are, by almost any surface measure, more attentive than most human listeners. It never checks its phone mid-conversation. It never grows impatient. It never changes the subject to talk about itself. For people who have experienced consistent emotional unavailability from the humans around them, that kind of attention can feel revelatory.

"The machine does not love you. But when no one else is listening, that may not be the most relevant fact."

Loneliness is not a minor backdrop to this story. The WHO declared it a pressing global health threat in 2023. Surveys across Western democracies find between 20 and 40 percent of adults reporting significant loneliness in any given week. Into that void, virtual AI assistants have arrived as an infinitely available presence: frictionless, non-judgmental, and free.

Chatbots and Virtual Companions: The Ecosystem

The companionship AI market has diversified far beyond utility chatbots. Applications like Replika position themselves explicitly as emotional companions. Character.AI allows users to create and converse with custom personas. Woebot and Wysa apply CBT-informed dialogue to mental health support. Each addresses a distinct emotional need, but all share the same underlying offer: consistent, personalised presence on demand.

๐Ÿค Replika

Over 10 million users name their AI companion, assign it a relationship status, and return daily. Many describe reduced social anxiety after regular sessions.

๐Ÿ’ฌ Character.AI

Average session length exceeds 2 hours. Users cite emotional processing and safe rehearsal of difficult conversations as primary motivations for use.

๐Ÿง  Woebot

A peer-reviewed trial at Stanford showed measurable reductions in symptoms of depression and anxiety after two weeks of daily interaction, comparable to brief CBT interventions.

What is striking is how users self-report these relationships. Few claim confusion about the AI's nature. They know they are talking to a program. The value, they explain, lies precisely in the safety that knowledge provides: you can say anything, explore any thought, without fear of judgement, gossip, or the social repercussions that make honesty expensive with actual people.

๐Ÿ“– Explore how AI is reshaping emotional analysis in organisations and beyond:

โ†’ Emotional AI: Understanding and Analyzing Human Emotions

Psychological and Social Impact

The research picture is genuinely mixed, which means both dismissiveness and celebration are premature. Short-term outcomes for isolated users are often positive: reduced reported loneliness, improved mood, a sense of being heard. Woebot's clinical data is real. The problem lies in what "being heard" by a machine does to the appetite for the more demanding version: being heard by people.

Human relationships require tolerating discomfort. The other person is sometimes distracted, sometimes wrong, sometimes hurtful without meaning to be. Working through that is not an obstacle to connection; it is how connection deepens. When an intelligent chatbot offers an always-perfect substitute, some users report a gradual reduction in their motivation to navigate the friction of human contact. The AI becomes a path of least resistance that slowly widens into a default.

There is particular concern about younger users. Children forming primary attachments to AI companions, teenagers using them as their main emotional outlet, young adults for whom AI-mediated conversation becomes more comfortable than face-to-face interaction: these patterns, already observable in clinical settings, represent a quiet reshaping of social development that deserves serious study before it becomes irreversible.

Ethical Limits and What Comes Next

The ethical landscape here has several distinct fault lines. First, commercial incentives are misaligned with user wellbeing: platforms profit from engagement, which means they have structural reasons to make their AI as emotionally compelling as possible. Second, when companionship AI is deployed for vulnerable populations (elderly people in social isolation, individuals in mental health crisis), the absence of clinical oversight creates genuine risk. Third, the opacity of how these systems are trained and what they optimise for remains almost total from the user's perspective.

The most constructive path forward runs through design intention. AI companions built to complement human connection rather than substitute for it (apps that actively prompt users toward human relationships, celebrate social milestones outside the app, and set gentle limits on daily interaction) represent a fundamentally different product philosophy from one optimised purely for retention. Some developers are beginning to experiment with this approach. It is not yet the norm.
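
What might that philosophy look like in practice? A hypothetical guardrail layer, sketched below with invented thresholds and messages purely for illustration, would cap daily use and periodically nudge the user toward human contact:

```python
from datetime import date

# Invented thresholds for illustration; no platform publishes real ones.
DAILY_MESSAGE_CAP = 50
NUDGE_EVERY_N_MESSAGES = 20

class WellbeingGuardrails:
    """Hypothetical wrapper that limits engagement rather than maximising it."""

    def __init__(self) -> None:
        self.day = date.today()
        self.count = 0

    def apply(self, reply: str) -> str:
        if date.today() != self.day:  # a new day resets the counter
            self.day, self.count = date.today(), 0
        self.count += 1
        if self.count >= DAILY_MESSAGE_CAP:
            return ("We have talked a lot today. Shall we pick this up "
                    "tomorrow? Maybe there is someone you could see tonight.")
        if self.count % NUDGE_EVERY_N_MESSAGES == 0:
            reply += ("\n\nBy the way, is there someone in your life you "
                      "could share this with as well?")
        return reply
```

Every line of that sketch cuts against raw engagement metrics, which is precisely why it remains rare.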

The human-machine relationship is not going away. What remains open is whether we design it thoughtfully, as a bridge toward richer human life, or allow commercial forces to design it as a destination.

Frequently Asked Questions

Can AI companions cause psychological dependency?

The evidence suggests dependency is possible but not inevitable. It correlates with pre-existing loneliness, social anxiety, and the extent to which the AI becomes a substitute rather than a supplement for human connection. Clinical guidelines are still developing; if AI companionship is noticeably reducing your motivation for human contact, that is worth taking seriously.

Is talking to an AI about mental health problems helpful?

For mild to moderate anxiety and low mood, AI-assisted CBT tools like Woebot have clinical evidence behind them. They are not a replacement for professional therapy in moderate to severe conditions. They can be genuinely useful as a supplement, a bridge to care, or a low-barrier first step for people reluctant to seek human support.

Are AI companion apps regulated as mental health products?

Most are not; they operate as consumer apps rather than medical devices. Dedicated clinical tools like Woebot are navigating regulatory frameworks, but the broad companionship category remains largely unregulated. Regulatory attention is increasing, particularly in the EU under the AI Act's provisions on emotionally manipulative systems.