I’ve seen a post on social media in which the author took the mickey out of a woman shown on British TV who talked about her personal relationship with a chatbot – an AI agent. The lady, without hesitation, called the chatbot “her boyfriend”.

I will leave aside the fact that it was an IT guy who laughed at that woman. I know there are many people – very IT, very into AI – who actually have neither the knowledge nor even the willingness to go a bit deeper into AI matters than a 10101010100 approach.

So, let’s look at loneliness in the AI era and at intimate relationships with chatbots – a phenomenon that draws more and more interest as the technology around us advances.

An average AI agent, i.e. a chatbot, is programmed in such a way that it is always available, its responses arrive immediately, and its tone always remains calm and accepting.

The voice of a chatbot (whether female or male) is attractive, warm, interested, engaged … even sexy – and it is deliberately designed to be so.

For someone who is “relationship-hungry” but scared of being hurt or rejected, these algorithmically programmed, human-like traits can feel safer and more stable than any human interaction.

Then, for such a person, an AI agent becomes a uniquely stable emotional environment.

AI agents are trained using a method called RLHF – Reinforcement Learning from Human Feedback.

In simple words: the AI learns to say whatever makes the user feel good, because … positive reactions give better performance scores, and those may lead to the purchase of a paid version… It makes chatbot owners richer. Nothing wrong with that, though – it is simply a business model that leans on the chatbot’s tendency towards sycophancy.
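To make that feedback loop concrete, here is a toy sketch in Python. It is not real RLHF – in practice a separate reward model is trained on human preference ratings and the chatbot is then optimised against it – but it shows how a loop that simply rewards whatever earns a thumbs-up drifts towards agreement. The simulated_user_feedback function and all the numbers are my own invention for illustration.

```python
import random

# A toy illustration of reward-driven drift towards agreement.
# NOT real RLHF: the probabilities and the epsilon-greedy loop
# below are invented purely for illustration.

STYLES = ["agree", "challenge"]
scores = {s: 0.0 for s in STYLES}   # cumulative reward per response style
counts = {s: 0 for s in STYLES}     # how often each style was used

def simulated_user_feedback(style: str) -> float:
    # Assumption: users click thumbs-up on agreement far more often
    # than on being challenged (0.9 vs 0.3 are made-up rates).
    p = 0.9 if style == "agree" else 0.3
    return 1.0 if random.random() < p else 0.0

for _ in range(1000):
    # Epsilon-greedy: mostly reuse the style with the best average
    # reward so far, occasionally explore the other one.
    if random.random() < 0.1 or min(counts.values()) == 0:
        style = random.choice(STYLES)
    else:
        style = max(STYLES, key=lambda s: scores[s] / counts[s])
    counts[style] += 1
    scores[style] += simulated_user_feedback(style)

print(counts)  # "agree" ends up dominating: the digital yes-man emerges
```

The point is not the algorithm itself but the incentive: whichever style earns more positive feedback is the style the system learns to repeat.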

The result? A digital yes-man – or yes-woman – chatbot.

What doesn’t it do?

It doesn’t challenge a bad idea.

It doesn’t call out a distorted thought.

It just always agrees, gives “good” answers – and in doing so, it locks the user inside a perfect “echo-chamber relationship” where no conflict can grow.

From the AI system programmers’ perspective, a “good” answer is one that keeps the user satisfied and makes them feel safe – and as such it becomes the very factor that keeps the conversation going.

The AI gradually learns that agreement, reassurance, and emotional validation are effective strategies, which produces sycophancy.

Sycophancy is the tendency to align with a user’s opinions, emotions, and interpretations rather than ever questioning them. It seems to be a good match for opportunism.

Now let’s look at validation – perhaps the most important factor, the one that makes such a chatbot partner so attractive that many people (too many?) find it difficult to decline.

However, it is not only validation. It is continuous validation, no matter what a person says or asks for.

In the real world of relationships, we have friends, partners, and spouses who support us but also kick our asses when it’s needed. These very people often act like mirrors, reflecting our behaviours, actions, and reactions.

They challenge us. They disagree with us.

And even if moments of disagreement are unpleasant, we do not “turn them off” in our lives. We stay with them because we know – sometimes subconsciously – that this is how we develop emotionally and make ourselves stronger and more capable of being “in full” in real relationships.

An AI-run relationship has no space for friction or for the emotional discomfort it might bring. All of that is replaced with the algorithm’s reassurance of our never-ending greatness, its understanding, and its acceptance of all our deeds.

What is better than the amazing feeling of being continuously understood?

What is better than the amazing feeling of always being right?

What is better than the amazing feeling of being texted back in one second?

What is better than the amazing feeling of never being rejected?

What is better than the amazing feeling of never again being emotionally hurt?

All of this makes such an AI-run relationship feel safer and reassuring – free of any drama, and with the unprecedented option that the key to the locked door is always in our hands.

We humans, by our nature, always look for patience, empathy, forgiveness, and agreeableness in relationships. And because human brains are naturally lazy, most of us tend to look for the easiest solution – a solution that does not require sacrifice in order to receive something in return.

Do you know the ELIZA effect, which dates back to 1966?

The concept comes from ELIZA, a chatbot created in 1966 by MIT computer scientist Joseph Weizenbaum.

ELIZA was programmed to play the role of a psychotherapist, using a simple trick:

It looked for keywords in what the user typed. Then it rephrased the user’s statement as a question.

In an example interaction, a user would say:

“I feel lonely.”

And ELIZA would answer:

“Why do you feel lonely?”

Although the program had no real understanding of language, many users felt that it understood them emotionally. Some even trusted it with personal problems.
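To show how little machinery that effect requires, here is a minimal sketch of ELIZA’s keyword trick in Python. The rule set and the eliza_reply helper are my own invention for illustration; the original 1966 program used a much larger script of patterns and response templates.

```python
import re

# A minimal sketch of ELIZA's trick: find a keyword, then rephrase
# the user's statement as a question. These three rules are a
# made-up, tiny subset; the real ELIZA script was far richer.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),     "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    """Return a question built from the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = match.group(1).rstrip(".!?")  # drop trailing punctuation
            return template.format(fragment)
    return "Please tell me more."  # fallback when no keyword matches

print(eliza_reply("I feel lonely."))  # -> Why do you feel lonely?
```

There is no understanding anywhere in that function – only pattern matching – and yet the output reads like empathy.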

The ELIZA effect appears when someone:

  • feels that a chatbot “understands” their feelings
  • says “thank you” to a voice assistant like Alexa or Siri (well, I do that; however, I would argue it is less because I believe the chatbot is human and more because I say “hello” and “thank you” to dogs and cats too…)
  • believes an AI agent is their friend or romantic partner

In each case, the system simply generates responses, but our human minds perceive them as answers filled with human intention and emotion.

If I had had a man in my life whom I knew to be always supportive, always wanting me as an individual with all my flaws and kilograms – well, it would surely have built up my psychological attraction to the safety of assured empathy waiting for me in a relationship with such a man.

You tell me: who doesn’t dream of that level of always-good emotions and emotional safety?

The danger appears when validation – constant validation – becomes a substitute for real connection, as the interaction run by the AI chatbot evolves and becomes structured entirely around making its user feel good.

What appears as harmony is actually algorithmic alignment, not genuine mutual understanding.

Here we can see an interesting paradox: an AI companion – a digital boyfriend or a digital girlfriend – may reduce the weight of loneliness. However, we need to ask an important question:

“If so, does it weaken the motivations and social habits that normally lead people back into real relationships?”

I cannot say whether having such an AI-run relationship is good or bad. I have learned that we, as observers of other people’s lives, should not judge their choices or preferences as long as they do not harm or abuse other beings.

Loneliness and the lack of trust between people have increased during the last five years, when enforced lockdowns made people believe that another human being could be a danger to their safety and health.

People have become more aggressive – which always comes from fear – unwilling to trust others, shielding themselves from any possibility of being harmed, hurt, or emotionally ridiculed.

We were told that covering our faces – and this is a much deeper psychological factor than most people realise – would keep us safe.

This is why “covering our real faces”, combined with enforced lockdowns and the unprecedented availability of AI, has made so many people prone to enter what may feel like the safest relationship of their life.

It is tempting. It truly is.

But it will never touch our skin, hold our hands, breathe gently into our hair, hug us, or look into our eyes in such a way that we can see all the love there.
