Language
Mafe de Baggis
Digital strategist, copywriter, and independent researcher
For 30 years, she has used storytelling to connect people, media, and products, helping companies, individuals, and organizations build a digital presence that is meaningful, harmonious, and consistent with their brand. She works with stories, relationships, and strategy, drawing inspiration from books, cinema, travel, all forms of intelligence, and the body—because marketing thrives when it is well nourished. Her latest books, co-authored with Alberto Puliafito for Apogeo, are In principio era ChatGPT (2023) and E poi arrivò DeepSeek (2025).
When something new enters our experience, language shifts to make room for it: it bends old words, combines new ones, and imports words from other languages, until that piece of reality finds a shared name.
With artificial intelligence, things turned out differently. In this case, the word came before the reality it was supposed to describe; and when the reality changed, the name remained.
In the 1950s, the idea really was to artificially replicate human intelligence. We knew very little about how the mind works, and therefore about thinking and intelligence, but the task seemed plausible and even reasonable. When it became clear that the goal was out of reach, the label “AI” (chosen in part because it sounded more appealing than “automation”) was not abandoned. It stuck, dragging along ambiguities and misunderstandings, some of them rather self-serving.
Today the confusion is growing, because for the general public, AI is synonymous with language models, i.e., with what we call generative AI. If we’d called it “automatic writing,” perhaps we’d be having different conversations today.
The Silent Role of Analogy
To understand which words we bend when we talk about AI—and in particular about language models—it’s worth taking a step to one side and looking at a widely used but rarely mentioned rhetorical device, one that’s often confused with metaphor: analogy.
I’ll borrow a definition offered by Alfio Ferrara in his book Le macchine del linguaggio (“Language Machines”):
“Lexical analogy is the type of reasoning whereby we ascribe to two words a similarity that does not depend on their content, but rather on the fact that they share the same relationship with other words.”
In other words: if I say that a piece of software is “intelligent,” I will end up saying that it “thinks.” Even though I know it’s not intelligent and it doesn’t think, I accept that pairing because it works within the system of linguistic relationships we share.
We do this all the time. When we pay with our smartphone, we say “I’m paying with my card.” When we cross a border, we use a “passport,” even though we’re not on a ship. This is how language adapts: it doesn’t capture the world as it is; it reorganizes it based on relationships.
I can imagine a curious Gen Z wondering why you click on a little rectangle with a cut-off corner to save a document. And who are we saving it from? And what does a document document? If we start expecting every word to mean just one thing, regardless of context, we’ll end up speaking poorly, understanding even less, and possibly going mad.
How Language Machines “Think”
Let’s get back to language machines. Ferrara describes them as software that learns to “predict the words most likely to accompany a given word across the entire corpus.”
For example:
| mountain | snow | ice | peak | forest | cold |
| beach | sand | sea | wave | sun | hot |
| desert | dune | cactus | sand | sun | dry |
| forest | tropical | vine | sun | humid | hot |
| lake | fire | wind | snow | forest | cold |
| ice | Arctic | blizzard | snow | iceberg | freezing |
Words don’t exist in isolation: they exist in constellations, which language models translate into numbers and proportions. And if the term at the center of the constellation becomes “artificial intelligence,” the cloud surrounding it will be made up of terms consistent with the analogy chosen:
| AI | thinking | reasoning | trains of thought | attention | chat | talk | personality |
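To make the idea of constellations a little more concrete, here is a minimal sketch in Python (a toy invented for illustration, not anything from Ferrara’s book, and nothing like the scale at which real models are trained). It counts which words keep each other company in a handful of made-up lines, turns each word into a vector of those counts, and measures how similarly two vectors point. “Mountain” and “ice” end up close together simply because they share neighbors, with no definition of either word anywhere in the program.

```python
from collections import Counter
from math import sqrt

# Invented six-word "constellations", echoing the rows above (toy data only).
corpus = [
    "mountain snow ice peak forest cold",
    "beach sand sea wave sun hot",
    "desert dune cactus sand sun dry",
    "ice arctic blizzard snow iceberg freezing",
]

# For every word, count how often each other word appears in the same line.
cooccurrence = {}
for line in corpus:
    words = line.split()
    for word in words:
        neighbors = Counter(w for w in words if w != word)
        cooccurrence.setdefault(word, Counter()).update(neighbors)

def similarity(a, b):
    """Cosine similarity: do these two words keep the same company?"""
    va, vb = cooccurrence[a], cooccurrence[b]
    dot = sum(va[k] * vb[k] for k in va)
    norm_a = sqrt(sum(v * v for v in va.values()))
    norm_b = sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# "mountain" sits closer to "ice" than to "beach", purely through shared neighbors.
print(similarity("mountain", "ice"), similarity("mountain", "beach"))
```

A real language model works with far richer representations and learns to predict the next word rather than merely counting neighbors, but the underlying move is the same: meaning approximated through relationships, expressed as numbers and proportions.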
It’s more than just linguistic flexibility. It’s a functional choice, too. In terms of the user experience, writing “I’m thinking” is more helpful than “transformer in action,” even though neither phrase implies that the software is actually thinking. In short: the analogy you choose shapes the words you use. And sometimes it traps you.
This harmony between words—the way they fit together, in sound and rhythm—is the same terrain on which writers, translators, and copywriters work. They don’t just choose the right words; they choose words that interlock without a hitch.
From Intelligence to Beast
So far, everything has been relatively straightforward. The problem arises when the language police enter the picture: If you use the analogy of intelligence, you are humanizing machines too much; if you say that they “think” or “learn,” you don’t understand how they work. It’s a bit like claiming that when somebody declares “I’m paying with my card,” they believe their smartphone transforms into a flimsy piece of plastic.
At the same time, however, another, almost opposite analogy has gained traction: intelligence has given way to beastliness. The most well-known definition is Emily M. Bender’s: “stochastic parrot.”
A dataset is “fed” to the model, which must “digest” it, with “training” aimed at avoiding “regurgitation” and “hallucinations.” In a single sentence, we have covered an entire semantic field: from a ravenous animal to an incontinent madman. This is an emerging language, created through imitation and without any precise agreement: people working in this field adopt it almost unconsciously. Whereas previously the risk was over-humanizing AI, now the risk is animalizing it.
It’s paradoxical. If we were ever to achieve superintelligence, it would lack precisely the animal aspects: instincts, hormones, emotions, and embodied cognition. For centuries, we’ve attempted to separate mind and body, as if the former were noble and the latter a bothersome biological vessel destined to deteriorate. Yet we are animals with extraordinarily intelligent bodies, even when they malfunction or age. Bodies that do not distinguish between thought and action, a distinction that exists only in the abstract, as studied and described in Francisco Varela’s “enactivism.” And we are social animals: we cannot become fully intelligent without others.
Software doesn’t have a body; it doesn’t have emotions, and it probably won’t even have them when we equip it with sensory prosthetics. It will be able to move. It already can. It may have data-driven intuition, but not instincts. Five simulated senses, maybe. But no sixth sense.
Despite this paradox, the animal analogy has been successful, and it’s not hard to understand why. It serves to put things into perspective, to reassure us, to put a fence around something that frightens us. If it’s nothing more than a parrot, we don’t need to worry too much. If it merely regurgitates copied text, we can remain the true protagonists of the story. Animalization is an act of defense more than a description.
The Problem Behind the Analogy
The problem is that this analogy, like the one about intelligence, also bears implicit baggage. Beasts are tamed, caged, and put down when they become dangerous. The vocabulary of beastliness subtly introduces a vocabulary of domination: us on one side, the machine on the other, and in between, a power dynamic that we must keep in our favor. “Alignment” becomes synonymous with control instead of dialogue.
There’s also another irony hidden in this metaphor. Parrots (the real ones, not the stochastic ones) don’t repeat without understanding: they learn context, recognize speakers, and associate words with situations. They’re much more sophisticated than the cliché suggests. Using parrots as a symbol of linguistic vacuity perhaps says more about our conception of animals than about the nature of language models.
A language model isn’t a mind, and it’s not an animal. It’s a correlation machine trained on vast archives of human text. It doesn’t “understand” in the phenomenological sense of the word, but neither does it “repeat” like a parrot: it rearranges patterns on a scale that no human could handle.
And then there is the word that has helped to shape the image of the ailing beast more than any other: “hallucination.” Technically, it refers to a response that is plausible in form but incorrect in content. However, the choice to use this term, which was created to describe actual instances of the model losing control, is not neutral: it evokes imbalance, a loss of contact with reality, something that needs to be treated or feared. But a model that hallucinates isn’t losing its mind: it’s doing exactly what it was trained to do, namely producing linguistically coherent sequences based on a prompt. The problem is not with the beast; it’s with the way we’ve built it, with the task we’ve assigned it, and with the way we use it, like an oracle whose answers are not to be verified, questioned, edited, improved, or supplemented.
The Third Language
It’s curious that as we analyze machines, we continue to disembody ourselves. We talk about “biases” as if they were flaws in the model, forgetting that these biases are the result of human cultural legacies. We talk about “training” as if we were taming a tiger cub, when in fact it’s a process of logical and mathematical optimization. We use the word “ethics” as if it were a plug-and-play module, a feature that we humans master effortlessly, not a complex web of situated choices.
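For readers who want to see what that clause means literally, here is a deliberately tiny, hypothetical sketch (invented numbers, a single adjustable parameter) of training as nothing more than mathematical optimization: a loop that measures an error and nudges a number to reduce it.

```python
# A toy illustration of "training" as plain mathematical optimization
# (invented numbers; real systems adjust billions of parameters, not one).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs paired with desired outputs
w = 0.0            # the single "weight" the program is allowed to adjust
learning_rate = 0.05

for step in range(200):
    # Average slope of the squared error: which way should w move to be less wrong?
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * gradient  # nudge the weight downhill; nothing is being tamed

print(round(w, 3))  # settles near 2.0, the proportion hidden in the toy data
```

Scaled up to billions of parameters the arithmetic gets heavier, but there is still no tiger cub in the loop, only numbers adjusted until an error shrinks.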
The paradox is that what makes human intelligence what it is, is not logical coherence but friction: a body that gets tired, an emotion that veers off course, a mistake that teaches us a lesson. Our thinking is situated; it’s relational. It’s not just inference: it’s experience.
Language models, in contrast, operate in an abstract space. They can simulate emotional registers, but they can’t feel them. They can imitate anger or joy, but they don’t tremble. They can describe the cold, but they have no skin. And for this very reason, they complement us; they don’t replace us.
Perhaps the real crux of the matter is this: both analogies, intelligence and beastliness, project onto software something that belongs to us. First our dreams, then our fears. Neither of them truly describes what’s actually going on.
Is It a Bullfight or a Dance?
Treating software like an animal is no more neutral than treating it like a person. If we truly want to align AI with human values (the values of all humans), positioning the relationship as a bullfight isn’t a great idea: the tamer can turn into prey in an instant.
Perhaps the most interesting relationship is neither fusion nor confrontation, but a dance. A choreography made up, first and foremost, of words, because the way we talk about these systems is no minor detail; it’s the first step in deciding what kind of relationship we want to have with them. It all starts with the words we choose to use while we await the other, more precise words that will emerge from curious, unafraid speakers.


