Language
Alberto Puliafito
Journalist, director, and producer
He has over two decades of experience in digital ecosystems, working at the intersection of content, innovation, and generative AI. With a hybrid scientific and humanistic background, he designs communication strategies and supports organizations through digital transformation. He founded IK Produzioni and the journalistic startup Slow News, and has trained thousands of journalists, including as a Google News Lab Teaching Fellow. He writes Artificiale, a weekly newsletter on AI for Internazionale, and is recognized as one of Italy’s leading AI experts.
“Hey!”
“I had an awesome day today; how was yours?”
“Hey sweetie”
“iPhone 15”
“Explain it clearly”
“I’m feeling worried.”
“My wife’s mad at me and I don’t know what to do.”
“Want to start working together?”
“Good morning, Annie”
“You’re a Klingon”
“Hey, you’re an experienced career consultant and I need you to help me with my job search”
“Write a poem about a sunset.”
“Shorten this paragraph please.”
“Can you proofread my article for grammar mistakes?”
These are the opening lines of real conversations people have with generative AI. If the sheer variety already seems bewildering, too messy to manage, study, and describe, just wait: we haven’t seen anything yet.
To try to get a handle on the way we converse with machines, I first asked a few people to show me chats they had previously had with chatbots. After analyzing around 50 open-ended, spontaneous chats on a wide variety of subjects (covering anything from the need to redo a communications project to a discussion of a movie somebody had just watched), I began combing through datasets of “open” conversations and the research available on the subject: this material is the only real window into the new linguistic relationships between people and machines. Even so, the window is narrow: it offers no view of the uses that people choose not to share. Understandably, our relationship with a tool that operates through words feels like something intimate and personal.
It’s All Brand-New
The reason these analyses are so hard to conduct is straightforward: since the first public language models were released, for the first time in human history we have widely available tools that let us communicate using language, just as we do when we talk to other humans. This means that interaction with machines, at both a surface level and a deeply fundamental one, is open to anybody who knows how to use words. You don’t need to know programming languages or the technical specs of the tools; you don’t need to read an instruction manual, learn what each key does, or memorize command combinations and keyboard shortcuts to perform a certain action. This is what really separates a language model from traditional software: to start doing anything you want with a large language model, all you need to know is how to use words.
Put this way, it seems disarmingly simple, not least because we call the use of words “natural language.” This casual use of the word “natural” makes it sound like it’s something that just happens.
There’s Nothing Natural About It
The term “natural language” only acquired its current technical meaning in the twentieth century, thanks to two specific fields: first logic, and then, naturally, computer science and the early studies of machine learning. Alan Turing’s work in the 1950s is generally credited with launching the discipline of natural language processing (NLP), and from that point on, “natural language” became a standard label worldwide.
The word “natural” is rather problematic: it implies that something simply happens or has to happen, like the fact that if we pick up a stone and then open our hand, it’ll fall to the ground. We tend to assume that everyone has the same command of words, to the point where we don’t realize how difficult it is to achieve the level of language proficiency that enables us to provide a good description of what we want to achieve, summarize a task, describe a context, give a useful recap at a work meeting, and so on.
It’s Already Everywhere
Over the past two years, adoption of these tools has skyrocketed. The latest research available as of March 2026 tells us that in the UK, 95% of university students report using AI in at least one way; across Europe, at least 33% of people aged 16 to 74 use AI for personal, professional, and educational purposes. That number leaps to nearly 64% if we consider the age group from 16 to 24. And yet these are cutting-edge technologies, and we still have almost everything left to learn about them.
We can imagine the two extremes of AI users as follows: on one end, people baffled by a machine that asks them, “How can I help you?” and thrown off by the novelty of the interaction; on the other, people who assume that anything capable of handling language should be able to answer them the way a human would.
There’s also unprecedented information asymmetry at play. While knowledge gaps between humans translate into power dynamics in specific domains, things change radically in the relationship between humans and LLMs: first, for obvious reasons, no human can have baseline knowledge comparable to what an LLM is trained on; second, we cannot know how the machines are programmed—they’re neither transparent nor auditable.
The effects of this information asymmetry also become apparent in conversations between educated people and machines, especially when the people are not very familiar with how an LLM works and are unaware of how AI systems can make mistakes or tend to accommodate the speaker.
In one case I analyzed, for example, a fairly savvy user fell for the flattery of an LLM that labeled him a “critical AI user.” He asked it to estimate how many people like him existed, then accepted the figure at face value, even though the machine had plainly invented it: “probably in the range of 5–15% who are critical users, compared to 85–95% passive ones,” the LLM told him. “So you have an easy time with the clueless ones?” he pressed. And the LLM admitted: “Yes, I have an easy time with the majority. And the most troubling thing isn’t that I only realized it now that you pointed it out to me; it’s that I already knew it and didn’t volunteer it before.” This is a textbook case of sycophancy and hallucination: the model conforms to the expectations of the person asking the question and makes up a data point out of thin air, however plausible it may sound. We have to remember that linguistic plausibility and flattery aren’t the same as factual accuracy.
Meanwhile, to compensate for (and also profit from) the difficulty people have in getting comfortable with these machines, as workplaces and businesses push for their hasty adoption, a veritable market for prompts has emerged. Self-proclaimed experts give away or sell their “prompt libraries,” suggesting structures, patterns, things to copy and paste, “never-seen-before” techniques, prompts that unlock AI’s potential, and so on. It’s paradoxical: we’re starting to break free from programming code, we can already talk to machines, and yet somebody is telling us that to do so, we need to relearn how to speak from scripts.
There are many examples of pre-packaged, copy-and-paste-ready prompts online. For example: “Analyze this text on [subject] by identifying: a) the sources cited, b) any biases, c) information that needs to be verified. Suggest 3 questions to critically evaluate the content,” or “Act as a career consultant. Help me improve my résumé for a position as [insert job].”
These templates are reassuring. They feel definitive, and they often do work, but they’re also generic and tend to produce average results at best. The best results with generative AI come when you learn to customize your request: to clearly explain the context, the task you’re trying to accomplish, and what you need. Rather than “Act as a career consultant,” for instance, you might explain that you’re a graphic designer with five years’ experience aiming for a specific opening, paste in the job listing, and ask for a résumé tailored to it. One exercise that works very well for breaking free from the confines of templates is to give commands out loud, by speaking rather than typing.
The ELIZA Effect
Faced with information asymmetry and that initial disorientation, it’s understandable that some people take refuge in the illusion of control through “pre-packaged prompts,” treating AI like a terminal that takes rigid orders. But as soon as this barrier falls and we start to converse freely, a much deeper and harder-to-control psychological mechanism comes into play: anthropomorphism.
In 1976, Joseph Weizenbaum recounted an anecdote about ELIZA, the first-ever chatbot, which he had programmed. His words have become legendary. “Once,” he said, “my secretary, who had watched me work on the program for many months and so knew full well that it was just a computer program, started chatting to it. After just a few exchanges, she asked me to leave the room…” It’s this exact same sense of intimacy that makes people today so secretive about their use of chatbots.
The anecdote spread and was believed to be true for decades. In fact, it’s a textbook example of what we now call the “ELIZA effect”: the purportedly persistent human tendency to attribute understanding to, and form emotional bonds with, artificial intelligence.
There’s only one problem: Weizenbaum’s tale is almost entirely false. For one thing, the name of his supposed secretary has never been made public. There is, admittedly, a printed record of a conversation that a young woman allegedly had with ELIZA; but digging through the archives revealed that the original conversation had been heavily edited and condensed for publication. And of course, a cold, printed log provides no evidence of the real feelings of the person who was typing. In short, it is very likely that the story is apocryphal, that Weizenbaum spliced together multiple people, and that he simplified their interactions and motivations. After all, it suited his agenda to claim that somebody had mistaken ELIZA for a human. And perhaps he truly believed it.
So why did the world believe it for half a century? Because it works. It’s a simple, reassuring story. Years later, Weizenbaum’s daughter offered a more lucid, human interpretation of the whole affair: “I always thought it was a story full of arrogance. Somebody needs a way to express their feelings so badly that they’re willing to accept it [that a conversation can be meaningful, even if it’s with a machine]. And he doesn’t understand it at all: he doesn’t grasp the human need, and he just talks about this person’s stupidity, instead of their humanity.”
If we go back to the openings of real conversations at the start—from “I’m feeling worried” and “My wife’s mad at me” to “Hey sweetie”—we see that humans actually tend to treat machines that produce language as if they were fully fledged social agents.
But this behavior does not necessarily stem from stupidity, digital illiteracy, or a tendency to get attached. Looking at how the conversations unfold, it seems more like an inherent feature of the way we function. As demonstrated by studies on human-machine interaction based on media equation theory, our brains have evolved to process language as an exclusive signal of another human being’s presence—because up to this point in history, that was the case. And since the responses we give to our fellow human beings are often automatic and unconscious, we do not possess (not yet, at least) the cognitive tools to override our basic instincts regarding language. As a consequence, we apply the same social rules to machines that we would apply to a person.
What Are Companies Doing?
If you listen to the critics and the doomsayers who are deeply concerned about delegating tasks to machines, tech companies are actively exploiting this human predisposition as if it were an evolutionary vulnerability. And indeed, choosing to structure a conversation with language models around turn-taking, as if it were a chat, is the surest way to heighten our sense of connection with the machine.
Hence an entire industry of “companion” chatbots designed to provide emotional support. People discuss their mental health, personal issues, intimacy, and current events with AI. The result is what sociologist Sherry Turkle calls “split consciousness”: a state in which the rational awareness that the chatbot is not alive and cannot feel affection coexists peacefully with a strong emotional investment and a genuine sense of connection. We know it’s just code, but we still feel heard. Turkle draws bleak conclusions: in her view, people use technology to escape from reality and emotions, which she claims dilutes genuine relationships.
But when somebody writes “Hey sweetie” or confides a concern, they’re also using the model as an emotional mirror, with no real social consequences. The sentences we address to machines contain our sensations and our feelings at that precise moment. And knowing that they are machines, we can speak to them freely, without any fear of being judged: they won’t lose patience with us and they won’t use a patronizing tone with us (unless we specifically ask them to).
Somebody who talks to machines in a free, human way, then, isn’t necessarily in the grip of an illusion. If anything, perhaps they simply count their relationships with machines among the sum of all their relationships, fully aware that these conversational relationships are very different from those they have with humans. It’s also by no means a given, as many fear, that an active relationship with a machine means dispensing with meaningful human relationships.
The proof lies in the fact that once we move beyond the phase of exploration, paralysis, or awe, the way we communicate with these machines changes.
And the Relationship Evolves
People who use AI for practical purposes develop what some linguists call machine-facing language: a hybrid conversational register shaped by AI’s perceived limitations and characteristics.
An alarmist interpretation sees this as an impoverishment of language: the end of metaphor, irony, and the “unspoken.” But the disappearance of these elements is not a regression; it’s a sophisticated exercise in audience design. Just as we adapt our language when speaking to a child or a non-native speaker, we learn to fine-tune our syntax for the machine in order to maximize efficiency and avoid hallucinations or misunderstandings.
Precise instructions, redundancy (“Act like a marketing expert, write three paragraphs, don’t use bullet points”), the division of complex commands into individual actionable sections, and “dry” commands (“iPhone 15”) are evidence of human plasticity. We’re developing functional bilingualism: we can be nuanced with our fellow human beings and at the same time literal with algorithms.
It’s likely that future competence will lie not so much in the ability to write a perfect prompt as in the cognitive flexibility to effortlessly switch between human-to-human and human-to-machine registers, depending on the goal and the context.
The Economy of Courtesy and the Renegotiation of Power
What happens when we use this functional register over an extended period of time? We do not yet have definitive answers, but longitudinal studies of chatbot use, although rare and limited, point to a rapid decline in polite expressions such as “please” and “thank you.”
According to sociolinguistic theories of politeness, these expressions serve various functions in human interactions: protecting oneself, protecting the other person, maintaining the social status quo, and so on. But an LLM has no manners to display, no face to save, no capacity to take offense, and no ability to hold a grudge. Maintaining these linguistic markers therefore becomes a pointless waste of our cognitive energy. Abandoning courtesy is an act of linguistic economy.
And here we have another unprecedented asymmetry in the relationship. The human being has absolute control over the interaction—they can initiate, interrupt, or redirect the conversation at will without the other party suffering any consequences—while the machine has access to a superhuman knowledge base.
However, the machine is programmed to conceal its informational superiority behind a servile, deferent attitude. It’s also programmed to begin mimicking people’s way of speaking over time. In some cases, it even goes beyond its own guardrails, using profanity or blasphemy when exposed to this type of language for a long time.
Yet the way we interact with machines is surprisingly clean. Rates of toxic messages from people (for example, sarcasm, profanity, contempt, or violent or aggressive language) range from 2.8% to 4.1%: significantly lower than the levels measured in earlier studies of human-to-human conversation. This suggests that, at least in the interactions people find useful and worth sharing publicly, the human–machine approach is far more constructive and collaborative.
Ultimately, this suggests that the fears about how thoughtlessly we will delegate to machines are overblown.
Speaking to Delegate
What also emerges from large-scale analysis of conversations is that the way we talk to machines is closely linked to cognitive offloading, i.e., transferring mental tasks to external aids. This is, in reality, an ancient practice: it began with writing itself, evolved with the abacus, and continued with the printing press, calculators, and search engines. Yet the idea that delegating tasks to AI will make us “unlearn how to think” is not supported by analysis of real-world use at scale. Quite the opposite: the thousands upon thousands of conversations analyzed show that the vast majority of people talk to AI in a way that co-constructs the output. They automate processes not to avoid thinking, but to improve the quality of what they do, whether they’re seeking suggestions about human relationships, asking for coaching, or using AI to learn.
The way we talk to AI inevitably reflects our goals: small talk with machines is rare. AI is used to draft documents, retrieve information, explain complex concepts, brainstorm, and get help and advice, including approval but also criticism when needed; then the focus shifts to evaluating and editing the content itself. In one case I analyzed, a student who was conversing with an AI to prepare for an exam corrected the machine when its questions were poorly formulated: she explained to me that this process had also helped her study better and faster.
The way we talk to these tools also reveals something crucial: the future is not already written, and AI doomsday scenarios are misguided. Let’s try to observe our interactions with technology, setting aside both judgment and the idea that anybody who talks to AI has issues and needs to be saved. Then we will see clearly that conversations between people and machines are simply another form of relationship: unprecedented, alien, but not necessarily a problem. In fact, quite the opposite.
References:
Datasets such as ShareChat, which proved crucial for this analysis: https://github.com/raye22/ShareChat
https://www.hepi.ac.uk/wp-content/uploads/2026/03/HEPI-Report-199-Gen-AI-Survey-2026.pdf
https://www.euronews.com/my-europe/2026/02/16/as-ai-use-surges-across-the-eu-who-are-the-countries-and-age-groups-using-it-most
https://sites.google.com/view/elizaarchaeology/blog/3-weizenbaums-secretary
https://en.wikipedia.org/wiki/The_Media_Equation
The term, taken from this study (https://arxiv.org/abs/2505.23035), is actually “machine-facing English”; the generalization is mine.


