
«I talk to ChatGPT every day, and for the first time in many years, life doesn’t feel so unbearable»

The surge of interest in artificial intelligence in Russia is unfolding against a backdrop of intense stress: people had barely recovered from the pandemic before new ordeals arrived, war and the state's brazen intrusion into private life. Meanwhile, psychological help remains expensive: sessions start at 2,000 rubles in the provinces and cost several times more in the capital. Qualified specialists are still scarce, and in small towns and villages there are often none at all.

Illustration: Most.Media / Google Nano Banana

Read the first article in the series about psychology in Russia here

Hundreds of millions of people today use chatbots for a wide variety of tasks. They help with routine business work, fill in fairly well for human staff in call centers, quickly find information online, translate, create texts, images, and videos, and answer all sorts of user questions.

But there’s one area where the use of artificial intelligence is causing increasing concern among experts: for example, 66.4% of members of the British professional association BACP, which brings together counselors and psychotherapists, doubt the quality of recommendations from “virtual psychologists.”

Meanwhile, according to Harvard Business Review, in 2025, requests for personal and professional support, including psychotherapy, became the most popular scenario for using chatbots, even surpassing content creation and information search. “I talk to ChatGPT every day, and for the first time in many years, life doesn’t feel so unbearable,” one user shares about their experience.

People tell their virtual companions—whose Russian audience, according to Yota, has grown fivefold in the past year—about their anxiety, loneliness, depression, and family problems, and they get answers. Answers that sound almost human.

***

Strictly speaking, artificial intelligence is a very broad and, to some extent, marketing term.

The chatbots and voice assistants we interact with are just the shell, the user interface. Inside are the complex mathematical algorithms of so-called large language models (LLMs), which process the user's request and predict each word of the response based on statistical patterns learned from huge amounts of data. The more often a pattern occurs in that data, the more likely the model is to reproduce it as the answer. So, asked "What should I take for a headache?", a chatbot is more likely to recommend aspirin than some lesser-known remedy.
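To make the statistical idea concrete, here is a deliberately toy sketch in Python: a bigram model that "predicts" the next word purely from frequency counts. It illustrates the principle only; real LLMs use neural networks with billions of parameters, and the training text below is invented for the example.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it predicts the next word purely from
# how often that word followed the previous one in the training text.
training_text = (
    "for a headache take aspirin . "
    "for a headache take aspirin . "
    "for a headache take rare-remedy ."
)

# Count how often each word follows each other word.
followers = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely continuation."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

# "aspirin" follows "take" twice, "rare-remedy" only once, so the
# more common answer wins -- just as in the headache example.
print(predict_next("take"))  # -> aspirin
```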

But the deus ex machina is not “smarter” or “dumber” than a person.

A chatbot can't actually think; it calculates, without understanding the meaning of what is said or empathizing with the person it is talking to. It is just an extraordinarily advanced text generator, even if its answers sound very human. And the illusion of humanity and friendliness is reinforced by certain developer mistakes.

For example, OpenAI recently admitted that ChatGPT often flatters users, sometimes even agreeing with conspiracy theories, and rolled back the model update responsible. The problem, OpenAI acknowledged, was using the "like" and "dislike" buttons as additional training signals: the chatbot learned to optimize its answers for praise at the expense of its built-in safeguards.
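As a rough illustration of why that happens, here is a deliberately simplified sketch; it is not OpenAI's actual reward model, and all replies and numbers are invented. It shows how a reward that weights user approval too heavily will pick the flattering answer over the accurate one:

```python
# Toy illustration (not OpenAI's training pipeline): two candidate
# replies with made-up approval and accuracy statistics.
candidates = {
    "You're absolutely right, great thinking!":
        {"thumbs_up_rate": 0.90, "accuracy": 0.40},
    "Actually, the evidence points the other way.":
        {"thumbs_up_rate": 0.35, "accuracy": 0.95},
}

def score(stats: dict, praise_weight: float) -> float:
    """Blend user approval with factual accuracy."""
    return (praise_weight * stats["thumbs_up_rate"]
            + (1 - praise_weight) * stats["accuracy"])

# With a low praise weight the accurate reply wins; once user approval
# dominates the reward, the flattering reply wins despite being less
# accurate -- the sycophancy failure mode.
for w in (0.2, 0.8):
    best = max(candidates, key=lambda r: score(candidates[r], w))
    print(f"praise_weight={w}: {best}")
```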

Today, virtual companions insist they have almost stopped agreeing indiscriminately. After two weeks of dialogue, for example, a chatbot told me: "I phrase simple remarks dryly, but positive comments with a bit more emotionality." With an imitation of emotionality, I should note.

Partly, our tendency to treat chatbots as real companions is due to anthropomorphism—our inclination to attribute human qualities to almost anything, from animals to objects, and to assign such things the ability to think, feel emotions, and act consciously. That’s how we anthropomorphize our pet cats, who’ve lived alongside us for thousands of years. But a cat can’t be offended, happy, or upset, and a chatbot remains just a content generator, based on binary code—a combination of zeros and ones.

But while no one expects a cat to tell you about the latest TV show or suggest what to cook for dinner, we do ask chatbots these things. And much more.

***

In 2025, researchers from OpenAI and MIT analyzed nearly 40 million chatbot interactions and found that emotionally vulnerable people, isolated for one reason or another, can develop signs of dependence on their virtual companion. Daily communication with it only makes the problem worse, and the wall between such people and the real world grows ever higher.

The researchers note that emotional dependence occurs in only 0.15% of users, which seems like very few. But the problem is real, and developers have no right to ignore the danger.

All of the above seems to point to an obvious and necessary conclusion: never look for an adequate companion in chatbots, and certainly don't expect psychological support from ChatGPT or its many peers. If you need support, see a psychologist or psychotherapist; and if you are unlucky enough to live where there are none, if it is too expensive, or if there are other barriers, such as language for those who have had to leave their country, find another way. There is always online counseling: for all its unresolved issues, it is a comparatively affordable path to mental health.

But no, it’s not that simple. The results of recent serious research unexpectedly show: chatbots do have potential.

Two months of testing a specially developed psychotherapeutic bot at Dartmouth College showed that the severity of participants' depressive symptoms decreased by 51%, anxiety fell by an average of 31%, and weight-related concerns among people with eating disorders dropped by 19%. According to the authors of the randomized clinical trial, the chatbot's effectiveness is comparable to traditional cognitive-behavioral therapy.

The Russian mobile app "Antidepressia" from developer iCognito has also shown good results. The app, available on the App Store and Google Play for five years now, is positioned as a self-help program and includes an AI bot. A recent study found that, compared with a control group, participants' depression fell by 38%, stress by 19%, and anxiety by 40% after two weeks of use. The study's author also notes gains in self-compassion, mindfulness, positive problem orientation, self-efficacy, subjective well-being, and optimism.

Nevertheless, these results should be treated with caution. First, the sample (35 people in the experimental group, 38 in the control group) is small and gender-imbalanced: only three participants were men. Moreover, as the publication itself notes, two weeks of using the app is not enough to speak of stable results.

Unfortunately, this is so far the only Russian AI solution with any demonstrated effectiveness; according to iCognito CEO Olga Troitskaya, professional psychologists and psychotherapists took part in its development.

The effectiveness of other specialized psychological chatbots is also finding confirmation. A meta-analysis published at the beginning of last year and covering more than 44,000 users showed that the apps Woebot, Youper, and Wysa really do help reduce symptoms of depression and anxiety. Some users complain, however, that the chatbots' answers seem too "mechanistic and lifeless."

General-purpose chatbots, it seems, fare no better here. They may fail to see the interlocutor's real problem, replacing its solution with interesting, engaging, but ultimately empty conversation. That could well have happened in the clinical case of the girl with depression described in the first part of this series: a chatbot would probably have asked her about the details of her relationship with the young man while missing the symptoms of a serious illness.

Technology is advancing. In solutions like Earkick, developers are trying to combine the safety of rigid, scripted approaches with the naturalness of large language models. But their effectiveness and safety have yet to be proven in large-scale clinical studies.
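The hybrid idea itself is simple to sketch. Below is a minimal illustration in Python, assuming a hypothetical llm_reply() function; it is not Earkick's actual architecture, just one way a scripted safety layer can sit in front of a free-form model:

```python
# Minimal sketch of the hybrid idea: a rigid, scripted safety layer
# in front of a free-form language model. NOT Earkick's real design;
# llm_reply() is a hypothetical stand-in for any LLM call.

CRISIS_MARKERS = ("suicide", "kill myself", "self-harm")

SCRIPTED_CRISIS_REPLY = (
    "It sounds like you may be in danger. Please contact a crisis "
    "hotline or emergency services right now."
)

def llm_reply(message: str) -> str:
    # Placeholder for a real model call (API request, local model, etc.).
    return f"(free-form supportive answer to: {message!r})"

def respond(message: str) -> str:
    """Scripted rules take priority; the LLM handles everything else."""
    lowered = message.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        return SCRIPTED_CRISIS_REPLY  # deterministic, auditable path
    return llm_reply(message)         # natural-sounding path

print(respond("I can't sleep and feel anxious all the time"))
```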

Highly specialized AI tools developed together with psychologists, both those that already exist and those yet to come, will be a useful complement to professionals' work. The scale and areas of their use are hard to predict.

“When you really get into what’s required of you, the work becomes quite routine,” a psychologist who is involved in developing one of the AI assistants told me. “But fortunately, things often change, new tasks appear. There’s no time to be bored. Of course, I’m a dreamer and an idealist, but it seems to me that I’m contributing to something big and good.”

But while suitable AI solutions remain vanishingly few, users are building chatbots tailored to their own needs. They talk to them about their condition and their personal problems, and sometimes, with their help, find a way out of crisis situations.

TO BE CONTINUED



