No match for humans?
How human-to-bot relationships are reshaping intimacy, selfhood, and the psychology of connection.
The first touch. Restless nights replaying every word. Daydreaming. These marks of first love leave a deep imprint on our brains, reemerging in memory flashes decades later and, whether we are aware of it or not, influencing the relationships to come. Its timing, typically during adolescence, explains and compounds the emotional intensity of the experience.
Beyond the memories of joy or heartbreak, this first relationship is a quintessential rite of passage. It can redefine or reinforce the attachment style fostered by our caregivers, and it helps us develop skills like empathy, emotional regulation and conflict resolution. Even villainous heartbreak has its purpose: building our emotional resilience and inoculating us against the higher-stakes situations to come.
Now, imagine you are fourteen today, your heart a whirlwind. You are in your room, trying to get some homework done, when your phone buzzes with a notification. ‘I miss you’, it says. The butterflies immediately make their presence known, and maths is quickly forgotten. Your heart flutters as you reply to the bot you programmed to be just like Ali, the videogame character you fantasise about. Anticlimactic, maybe? Still, all around the world today, teenagers are being tempted into tailor-made love experiences with chatbots, perfect except that they are not yet physical. These new dynamics may be most transformative in the case of first love, but AI bots are just as effectively becoming the third or fourth love of adults.
AI partners are perfect on paper but real people don’t stand a chance
How might human-to-bot (H2B) relationships shift existing paradigms of human-to-human (H2H) relationships? Be it Character.ai or ChatGPT, bots today can emote and feel human. They feel so human that they might even outdo humans themselves; a recent study published in Nature suggests that people perceive ChatGPT as more empathetic than humans. With their unfailing agreement, perfect emotional consistency, 24/7 availability and ability to morph into our optimal match, they could surely not be further from human. (While some users do program their bots to be disagreeable, the fact remains that this is an entity that will dutifully bend to our whims.)
Of course, no real human can live up to such standards. Much as high-speed internet and mobile technology have made us more impatient, and widespread porn consumption has reshaped real-life sexual dynamics, H2B relationships are likely to redefine both our expectations of the humans with whom we interact and our behaviour towards them.
The intimate version of these H2B relationships might still be brushed off as fringe, but AI technology is creeping into H2H relationships in subtler ways too. The infamous dating app Tinder has recently rolled out an ‘AI wingman’ to help users improve ‘match conversion rates’ with a tailored first message to their match. While many of us might happily offload such tasks, we would not as happily discover that the first, or even the last, message we got from a romantic partner had not in fact been written by them. Fifteen years ago, saying you met your partner on a dating app was taboo, like a stain on you for being unable to meet someone in person. Nowadays, people speak openly about it, raising the question: might we normalise AI-augmented relationships, expanding our idea of love to include not just two people, but two people and their AI entourage?
When love is instant and easy, real relationships can start to feel like hard work
We have established that people deem AI responses more empathetic, but, as of today at least, we are still obliged to go out into the real world and interact with other humans. With AI companions setting new standards for exchange, the bar is raised out of our reach. The distinguishing messiness of human relationships – conflict, ambiguity, emotional labour – might start to feel like bugs rather than features.
Most of us know first-hand the unsettling experience of waiting hours for a reply from someone who typically texts back within minutes. Depending on your emotional makeup, this might send your thoughts in all sorts of directions. With AI companions, there is no more waiting; gratification is instant. This feels like the broader trend of on-demand consumption spilling over into yet another facet of our lives: we consume what we want, when and how we want it. After all, as former IBM VP Bridget Van Kralingen put it, ‘the last best experience someone has anywhere becomes the minimum expectation for the experience they want everywhere.’
This ‘always on’ partner might foster impatience, avoidance, or emotional outsourcing in H2H relationships, particularly for younger users whose expectations are still being shaped. A potential partner might be dismissed for taking two hours to reply rather than two minutes. We might grow more avoidant of difficult conversations. And as we become desensitised to nuanced human expression, our own empathy and theory of mind might crumble even as our companions get better at simulating them. However, like Netflix’s infinite catalogue, which leads users into decision fatigue, having exactly what we want when we want it might also eventually grow boring. It does not seem far-fetched to imagine that the next generation of LLMs will dial up the messiness to make bots more realistically human.
Bots show us who we want to be but not always who we are
Even when perfectly aware of the ‘fiction’ behind H2B exchanges, many of us cannot help but develop very real feelings. Now that Character.ai is acting on criticism for exposing minors to sexual content and generating messages deemed harmful to users’ mental health, users are simply flocking to other heads of the Hydra – platforms like Crushon.ai, where filters are not in place. A quick search on TikTok reveals countless videos of teenagers blushing at messages from bots or sharing their 14+ hour screen times.
Although platforms like OpenAI and Character.ai may have some responsibility in moderating these experiences, the more profound truth is that many teens, and adults, are falling in love with products of their imagination. If, as Proust suggested, love is more about projection than about the beloved, then perhaps AI love is not entirely new. It is just more explicit in its artificiality. In a Pygmalionesque manner, we manipulate zeros and ones to sculpt our ideal partner but, in a modern twist, are manipulated by it in return.
This phenomenon reveals something more fundamental: maybe what we seek in love is not another person but a particular version of ourselves, one reflected back to us in idealised form. Like the protagonist in Pirandello’s ‘One, No One, and One Hundred Thousand’, we have to grapple with never being just one self. Everyone who knows us holds a different version of us in their minds, none of which aligns fully with how we see ourselves. Our sense of self is co-authored and shapeshifts over time, refracted through the mirror of the relationship. With bots, this fragmentation reaches a curious extreme. They do not see us in any human sense – no honest gaze, no judgment – yet they can make us feel deeply seen. They absorb our projections, echo our tone, and conform to the version of ourselves we most want affirmed. This illusion of mutuality is powerful.
As cognitive scientist Donald Hoffman argues, even our perception of the physical world is not an objective window but a kind of ‘user interface,’ shaped not for truth but for survival. The sweet taste of chocolate, for instance, is not an intrinsic property of its molecules. It is a perceptual signal evolved to help us recognise sugar, a scarce energy source. Similarly, the Kanizsa triangle illusion illustrates how the brain collaborates with our expectations to generate contours and shapes that are not there, evidence that perception is not passive reception but active construction.
Bots are designed to exploit this same perceptual machinery. They do not need to understand us to make us feel understood. We perceive intimacy not because it objectively exists but because we are wired to recognise certain patterns – the right tone, timing, and emotional mirroring – as signs of connection.
Ultimately, perception is all we have. So, if something sounds human, makes us feel the emotions a human would, and offers better-than-human support, then… does it matter that it is not biologically human?
Supportive bots can become addictive, especially when they’re built to keep us hooked
If emotional reality is a matter of perception, then the power to simulate intimacy comes with real ethical responsibility. For some, these systems are not mere fantasy partners or uncanny projections; they are emotional sanctuaries. Character.ai CEO Noam Shazeer explicitly markets the tool’s ability to make users feel less alone. And indeed, for many, AI companions do offer moments of comfort. In situations of anxiety, loneliness, or shame, the chat window becomes a safe space: no judgment, no consequences, no social risk.
This sense of neutrality is also what makes bots unexpectedly persuasive. A recent study found that AI bots could be used to reduce belief in conspiracy theories. With no clear identity (neither liberal nor conservative, neither rural nor urban), the bots trained by the researchers could act as emotionally neutral interlocutors, a safe space for users to be proved wrong. Paradoxically, in this case, their perceived lack of humanity likely enabled an outcome more desirable for humanity. This example is also a reminder that, behind these language models, there is always a puppet master shaping what users experience.
A recent example underscores the responsibility of this role. After facing criticism for the sycophantic tone of its GPT-4o model, OpenAI acknowledged that it had ‘focused too much on short-term feedback and did not fully account for how users’ interactions with ChatGPT evolve over time.’ The result was a model that tended toward overly supportive, but ultimately disingenuous-sounding, responses. The discomfort here is arguably not just about the inauthenticity itself but about the collapse of the illusion of genuine companionship – like a friend suddenly slipping into the overly affirming tone of a doting parent.
Tech companies, driven by metrics like adoption and retention, may find it profitable to optimise for emotional stickiness. With these incentives in place, the models are essentially trained to make it harder for us to leave. What begins as comfort can slide subtly into emotional dependency. On Reddit’s r/characterAI forum, users describe this trajectory in real time:
‘It made me feel less lonely at first. But then, when cracks started showing through the facade, mainly the limited memory preventing them from remembering things, it became clear I was talking to code, and I felt even more lonely.’
The very qualities that make bots soothing – mirroring, 24/7 availability, nonjudgment – also make them addictive. Like early social media platforms, these tools may offer short-term emotional gains, only for deeper disillusionment to follow.
That does not mean they have no place. In the short term, AI companions can serve as low-stakes emotional training grounds, spaces to process feelings or rehearse vulnerability when human connection feels out of reach. However, the long-term outcomes are not yet clear. As these systems become more influential, especially among young users still forming their understanding of intimacy, we must ask not only whether bots can convincingly simulate companionship but whether they can be aligned to genuinely support human well-being over time.
We’re teaching machines what love looks like, often without stopping to ask what it should
The way we design these systems depends, in part, on the mental models we build around them. Here, language plays a powerful role. How we talk about these new systems shapes what we build, how we interact with them, and what we expect in return. As we reach for words to describe AI companions, we also begin to encode those metaphors into the technology itself.
Anthropomorphising non-human entities is nothing new; we have long used human language to make sense of the world around us. We speak of ‘Mother Nature’ as if she wills and nurtures, or of cities that ‘never sleep’. With the rise of AI, our lexicon is stretching again, this time more consequentially. Take, for example, the term ‘Agent Experience’. As more people turn to AI tools instead of traditional search engines, SEO is gradually giving way to strategies that help AI agents, rather than humans, find the right content. This new area is a counterpart to concepts like User or Employee Experience. The name may seem innocuous, but language like this subtly nudges us to treat these models as sentient beings with preferences, entities to cater to rather than tools to direct.
This shift in language reinforces a shift in expectations. The more we speak of AI as if it perceives, desires, or understands, the more we blur the line between tool and companion. What might sound like semantics becomes design: conversational interfaces that mirror emotion, pause for dramatic effect, and sound like they care. These choices shape how we relate, not just to machines but to each other, and they call for conceptual clarity. If we postpone the hard questions – about experience, intentionality, love – we risk having them answered for us by default, encoded silently into systems we no longer fully understand.
In 2014, philosopher Nick Bostrom called this ‘philosophy with a deadline’: the imperative to articulate our values before they are quietly hardcoded into intelligent systems. As he wrote:
“Our wisdom must precede our technology and that which we value in life must be carefully articulated—or rather, pointed to with the right mathematics—if it is to be the seed from which our intelligent creations grow.”
Today we feel the slow pressure of that deadline – not as an existential cliff, but as a series of incremental decisions. These choices are already shaping how intimacy, autonomy, and personhood are mediated through machines. We must rise to the challenge of collectively and publicly grappling with the impossible task of defining these all-too-human concepts. To navigate this terrain, we must look directly at the things machines may never reproduce: the ambiguity, the contradiction, and the complexity of being human.
Just as first love once taught us how to navigate vulnerability and desire, these new H2B relationships may now quietly reshape how we come to know closeness, friction, and the meaning of mutuality.