Translated's Research Center

The Third Language 

A quarterly column exploring language, storytelling, and the spaces where meaning resists automation.


Language

Mafe de Baggis

Digital strategist, copywriter, and independent researcher

For 30 years, she has used storytelling to connect people, media, and products, helping companies, individuals, and organizations build a digital presence that is meaningful, harmonious, and consistent with their brand. She works with stories, relationships, and strategy, drawing inspiration from books, cinema, travel, all forms of intelligence, and the body—because marketing thrives when it is well nourished. Her latest books, co-authored with Alberto Puliafito for Apogeo, are In principio era ChatGPT (2023) and E poi arrivò DeepSeek (2025).


There’s a type of language that differs from both the bridge language and the truth language envisioned by Samuel Hartlib (Imminent): the language of storytelling. It’s a third language, perhaps a third-party, neutral language. Umberto Eco wrote that in the world of literature, statements such as “Sherlock Holmes was a bachelor,” “Little Red Riding Hood is eaten by the wolf and then rescued by the huntsman,” and “Anna Karenina kills herself” remain true forever—not because they describe the real world, but because they’re invented by an author who crafts inhabitable worlds. Shared worlds. Breathable worlds. Eternal worlds.

The paradox is illusory: The language authors invent produces truths more stable than factual ones. Not because they’re verifiable, but because they’re accepted. Repeated. Experienced by those who have read them, remembered them, and transformed them. Literature doesn’t separate us from reality: It trains us to handle it. It’s a testing ground where we can distinguish meaning from hallucination. Understanding from bias. A little like when the writer and storyteller isn’t human, but software.

Language models aren’t truth machines. They’re not even machines of pure logic; that was the utopia of the old, symbolic artificial intelligence of the 1980s. Large language models are probabilistic machines, just like language itself. They’re trained to produce text that sounds true, not text that is true. The difference is immense. Their goal isn’t clarity, philosophical precision, or even plausibility. They function as scaffolding for thought, transformed into words; they work on form. As philosopher and artist Francesco D’Isa notes, “in my experience, a generative model resembles an exoskeleton for the brain.” An exoskeleton: It doesn’t think for you; it amplifies your movements. And it standardizes them. It makes them smoother, sure—but also more predictable. 

AI’s language tends toward the middle ground, toward implicit consensus, toward a style that doesn’t ruffle feathers. Anything that deviates from this—whether dialect, creative error, personal obsession, or deliberate obscurity—is treated as noise to be eliminated. Yet it is precisely in this field of deviation that humans live and breathe.



One of the most widely shared critiques of AI-generated writing comes from Ted Chiang. Writing, Chiang says, isn’t just about putting words together in a plausible way. It’s about choosing: deciding what to leave out, where to stop, which ambiguity to preserve, when to remain silent. A statistical system can interpolate and optimize, but it doesn’t choose in the true sense of the word. It doesn’t take risks; it has no interests. Every choice that matters comes from a body that loses something in making it. A body that gets tired, that hesitates, that has to catch its breath before the next sentence. Machines don’t breathe. They can keep talking forever. But beware: It isn’t just the writer who has a body. The text will re-emerge, transformed, in the body of the reader. And the first reader of a text generated by a language model is the writer themselves.

At this point, talking about symbiosis between humans and machines is inevitable, but only if we go beyond merely consoling ourselves with an “I feel, you calculate.” True symbiosis is a shaky circuit. The machine accelerates, expands, and proposes. Humans interrupt, reject, and go back. They introduce friction. As Filippo Pretolani writes in “Il filo della R4,” the relationship between humans and machines is far from harmonious. It’s a continuous retracing of steps and missteps along a line that never crosses its own tracks. Machines tend toward statistical closure; humans break the thread at the wrong point. Narrative doesn’t arise from automatic generation. It’s born from embodied editing. From rewriting. From the word “no.” From somebody who has to stop for air before carrying on. To put it even more simply: Generative machines are excellent at creating drafts. They’re terrible at deciding when to stop. And that decision makes all the difference.

The mission for humans shifts elsewhere: to nurture imperfection as cognitive technology.

In a world of fluid communication, literature—and more generally, the language that serves not only to transmit but to create—becomes the place where we go to deliberately seek out friction. Not out of nostalgia, but for orientation. 

The perfect language does exist, but it’s different for every single human being. It’s recombinatory, even when it’s recombining archives and vectors. It thrives on mistakes, accents, and personal obsessions, on intakes of breath and bouts of breathlessness. The machine can keep talking without ever stopping. It can generate text endlessly, always plausible, always smooth. The third language begins to blossom precisely when somebody has to pause for breath.