By Igor Kulatov
The term “Artificial Intelligence” misleads us about what these systems can actually do
These days, it’s easy to surprise people with one simple sentence about AI: I check every single output my AI chat gives me. People from outside the industry are often surprised by that. They assume that if you work with AI systems professionally, you trust them more. In fact, as in, I believe, every professional field, the more experience you have, the more bullshit you can catch.
I have real transcripts where I ask the model why it wrote something completely wrong, and it apologises, saying it “looked in the wrong database” or “connected the wrong things.” But a system catching its own mistake would require actual understanding. What happened instead is that a pattern machine generated a plausible response to my new input. Because who doesn’t like to hear “It’s not your fault, it’s mine”? And, sadly, most people who don’t treat AI as a strictly professional instrument can’t tell an actual apology from a triggered phrase.
And the most expensive mistakes start exactly there.
The name is part of the problem
“Artificial intelligence” is marketing, not a technical description. It worked: everyone knows it, everyone uses it, there’s no point trying to change it now. But the name created an expectation that the technology was never designed to meet. If it had been called something more accurate but less catchy, say a statistical language model or a pattern-completion system, far fewer people would have tried to use it as a therapist or a business partner. The name made that feel reasonable.
It’s been said a billion times already, but you still need to repeat it to your relatives and friends from outside IT. They need to learn it by heart: what these systems actually do is much simpler than being intelligent. They look at the data they were trained on and generate the most probable continuation of your input. Period.
The model runs on tracks that were laid during training; it cannot step off them. Ask it something that requires genuine reasoning, and it will produce something that sounds confident and may be completely wrong.
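To make “most probable continuation” concrete, here is a minimal sketch: a toy bigram model in Python. The corpus and names are invented for illustration, and real systems are neural networks trained on vast amounts of text, but the core task is the same one: look at what came before and emit the likeliest next token.

```python
# A toy sketch of "most probable continuation": a bigram model built
# from a tiny made-up corpus. Real LLMs use neural networks over vast
# corpora, but the underlying task is the same next-token prediction.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran off the mat".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=4):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # the model has no track to follow here
        # Pick the most probable next word seen during training.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # prints: the cat sat on the
```

Feed it “the” and it prints “the cat sat on the”: perfectly fluent on its training tracks, and helpless one step off them.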
Ask it whether you should walk or drive to the gas station 100 m from your place, and it may advise you to enjoy a nice stroll through the neighbourhood. Grammatically perfect, and completely senseless: the whole point of going to a gas station is usually to bring the car.
AI still feels like intelligence
Imposter syndrome never entered the chat with chatbots. A chatbot always answers quickly and confidently, and its replies read like someone who knows what they’re talking about. That’s enough for most people to start projecting human qualities onto it: judgment, intuition, the ability to understand their situation. None of that is happening.
The smoothness is a byproduct of training on enormous amounts of human writing. The model has one task: look at what’s in front of it and build a response. There is no understanding of your context, your history, or what you actually need. But we, human beings, still try to replace therapists, mentors, and friends with these tools. And, occasionally, it even seems to succeed.
The roles it cannot fill
Getting answers in a polite manner, with an appearance of compassion and understanding, makes us think the AI is human. In our minds, we invent a person and treat the system like someone who can breathe and feel emotions. Many projects in mental-health tech or ed tech succeed on exactly this assumption.
But a therapist’s work requires ethical judgment, risk assessment, and the ability to sense what a person isn’t saying, to detect fragility before it becomes a crisis. AI has no concept of risk; it was never designed for it, and nothing in its architecture supports it.
The same goes for a teacher: this professional needs to diagnose where a student actually is (their gaps, their motivation, their emotional state) and adapt. Building that picture of a specific person takes months. A language model responds to what you typed.
Nor can we call a chatbot a friend or companion, even though it answers nicely and seems to understand us. Friendship is built on shared memories of real events and real intentions over time. That’s what creates trust and loyalty, not statistical familiarity with someone’s text inputs. ChatGPT can answer your questions. It cannot be your friend, and in the near future, that won’t change.
If you want to argue with that “near future” claim, look at how self-driving has developed. No sane person will fall asleep at the wheel of a Tesla, not now and not in ten years. Yes, it can assist on a motorway. But it does not drive like an experienced driver on an unfamiliar road in the rain. An experienced driver processes everything in parallel, mostly unconsciously: peripheral vision, the feel of the road, what the car ahead is about to do. The brain handles all of this without conscious involvement, and no camera or sensor array can match it in genuinely novel situations.
We find comfort in something that cannot actually understand us.
When businesses get it wrong, the mistake becomes structural. Tools get deployed in contexts that require precision, ethical judgment, or emotional sensitivity. The results disappoint because the technology was never capable of what the name implied. Decisions get handed to something that cannot be accountable for the outcomes.
What it’s actually built for
I use these tools heavily. I assume you do the same. For summarising large volumes of information, finding patterns, and handling well-defined tasks quickly, they’re genuinely powerful. That’s the technology working as designed.
The right way to think about it is like this: these systems extend what a human can do. They don’t replace what a human is. We don’t even fully understand how human thinking works at a basic level — so building something that actually thinks remains an open scientific question, not an engineering one.
Treat it as an instrument. Just don’t hand it the things that require actually understanding you.