Language
Mafe de Baggis
Digital strategist, copywriter, and independent researcher
For 30 years, she has used storytelling to connect people, media, and products, helping companies, individuals, and organizations build a digital presence that is meaningful, harmonious, and consistent with their brand. She works with stories, relationships, and strategy, drawing inspiration from books, cinema, travel, all forms of intelligence, and the body—because marketing thrives when it is well nourished. Her latest books, co-authored with Alberto Puliafito for Apogeo, are In principio era ChatGPT (2023) and E poi arrivò DeepSeek (2025).
The Uncomfortable Third – April Issue
What have novels, social media and language models got in common? That, albeit in different ways, the pleasure derived from engaging with them depends to an enormous extent on the person reading, participating and collaborating.
Let’s start with books, where this is hardest to spot: we talk far too little about each individual reader’s role in completing the work in their own way.
Italo Calvino wrote in his essay Cybernetics and Ghosts in 1967 that the author’s self dissolves in writing, and that the decisive moment in a work of literature is the act of reading it. Machines – whether the ones Calvino imagined in 1967 or the real ones we have today – can generate infinite combinations, but literature happens in the shock that just one of these combinations produces in a real person, with a body, at a specific moment in time. We often pause to point out that language models don’t actually understand, but what strikes me more is that they don’t take pleasure in reading (and in understanding). They don’t know what it’s like to step into another world and lose yourself in it, blending it with your own.
Around the same time, Roland Barthes proclaimed ‘the death of the author’ to make the same point: meaning doesn’t arise at the source, but at the destination, in the encounter between text and reader. An encounter that is possible, but never guaranteed, because only machines read everything that’s handed to them.
If meaning is made through reading, then it can’t be controlled, it can’t be certified and it can’t be scaled
This insight has been systematically removed from the debate on AI. Not refuted, but forgotten. Because it’s inconvenient: if meaning is made through reading, then it can’t be controlled, it can’t be certified and it can’t be scaled. You can’t package it into a paper, a canon or a dataset. The debate about writing remains on the writers’ turf, leaving us readers standing on the sidelines.
One of the few exceptions is David J. Gunkel, who brings this idea back to centre stage in Of Remixology: ‘Unlike the divine creator who supposedly fabricates something out of nothing, the author of a text – the author of this text in particular – is nothing more than a kind of conduit or DJ, whose sole authority remains limited to selecting, responding to and reconfiguring already existing materials.’ If human authors were already recombinators, rather than ex nihilo creators, then the distinction between human writing and machine generation becomes more nuanced. And more interesting, too.
Gunkel and Coeckelbergh go further in Communicative AI: ‘Destination, not origin, matters for meaning. Logocentrism has had things backwards and upside down. It is the reader, and not the author, who is the locus of meaning-making in writing.’
In 2024, decades after Calvino, Ian McEwan has a character in What We Can Know say that a work of literature only truly exists in the minds of its readers, and even, he adds, in the minds of those who have merely heard of it.
There is a real risk of translating all this into the hollow, deterministic approach of ‘writing for the reader’. Language models do exactly that, and they do it with a precision no human author has ever achieved: trained on billions of human reactions, optimised for reader appeal, they construct a statistically impeccable ‘implicit reader’. They don’t choose to write for the reader: they don’t know how to do anything else, unless the reader-author explicitly or implicitly asks them to. It’s a danger, not destiny. Knowing this raises a question worth asking out loud: why not demand models that are programmed to disrupt rather than converge? To make god moves: the gambits that no human player would have dared attempt, the ones that look wrong and then flip the board, as happened when AlphaGo beat Go champion Lee Sedol. AI geared towards shock rather than implicit consensus. It’s not science fiction; it’s a design choice that we don’t yet know if we can even ask for. If fiction is a third language, created through an encounter that changes every time, a discerning reader is just as important as a brilliant author. Let’s focus on that.
The Third Language – January Issue
There’s a type of language that differs from both the bridge language and the truth language envisioned by Samuel Hartlib (Imminent): the language of storytelling. It’s a third language, perhaps a third-party, neutral language. Umberto Eco wrote that in the world of literature, statements such as “Sherlock Holmes was a bachelor,” “Little Red Riding Hood is eaten by the wolf and then rescued by the huntsman,” and “Anna Karenina kills herself” remain true forever—not because they describe the real world, but because they’re invented by an author who crafts inhabitable worlds. Shared worlds. Breathable worlds. Eternal worlds.
The paradox is illusory: The language authors invent produces more stable truths than factual ones. Not because they’re verifiable, but because they’re accepted. Repeated. Experienced by those who have read them, remembered them, and transformed them. Literature doesn’t separate us from reality: It trains us to handle it. It’s a testing ground where we can distinguish meaning from hallucination. Understanding from bias. A little like when the writer and storyteller isn’t human, but software.
Language models aren’t truth machines. They’re not even machines of pure logic; that was the utopia of the old, symbolic artificial intelligence of the 1980s. Large language models are probabilistic machines, just like language itself. They’re trained to produce text that sounds true, not text that is true. The difference is immense. Their goal isn’t clarity, philosophical precision, or even plausibility. They function as scaffolding for thought, transformed into words; they work on form. As philosopher and artist Francesco D’Isa notes, “in my experience, a generative model resembles an exoskeleton for the brain.” An exoskeleton: It doesn’t think for you; it amplifies your movements. And it standardizes them. It makes them smoother, sure—but also more predictable.
AI’s language tends toward the middle ground, toward implicit consensus, toward a style that doesn’t ruffle feathers. Anything that deviates from this—whether dialect, creative error, personal obsession, or deliberate obscurity—is treated as noise to be eliminated. Yet it is precisely in this field of deviation that humans live and breathe.
The perfect language does exist, but it’s different for every single human being
One of the most widely shared critiques of AI-generated writing comes from Ted Chiang. Writing, Chiang says, isn’t just about putting words together in a plausible way. It’s about choosing: deciding what to leave out, where to stop, which ambiguity to preserve, when to remain silent. A statistical system can interpolate and optimize, but it doesn’t choose in the true sense of the word. It doesn’t take risks; it has no interests. Every choice that matters comes from a body that loses something in making it. A body that gets tired, that hesitates, that has to catch its breath before the next sentence. Machines don’t breathe. They can keep talking forever. But beware: It isn’t just the writer who has a body. The text will re-emerge, transformed, in the body of the reader. And the first reader of a text generated by a language model is the writer themselves.
At this point, talking about symbiosis between humans and machines is inevitable, but only if we go beyond merely consoling ourselves with an “I feel, you calculate.” True symbiosis is a shaky circuit. The machine accelerates, expands, and proposes. Humans interrupt, reject, and go back. They introduce friction. As Filippo Pretolani writes in “Il filo della R4,” the relationship between humans and machines is far from harmonious. It’s continuously retracing steps and missteps over a line that never crosses its own tracks. Machines tend toward statistical closure; humans break the thread at the wrong point. Narrative doesn’t arise from automatic generation. It’s born from embodied editing. From rewriting. From the word “no”. From somebody who has to stop for air before carrying on. To put it even more simply: Generative machines are excellent at creating drafts. They’re terrible at deciding when to stop. And that decision makes all the difference.
The mission for humans shifts elsewhere: to nurture imperfection as cognitive technology.
In a world of fluid communication, literature—and more generally, the language that serves not only to transmit but to create—becomes the place where we go to deliberately seek out friction. Not out of nostalgia, but for orientation.
The perfect language does exist, but it’s different for every single human being. It’s recombinatory, even when it’s recombining archives and vectors. It thrives on mistakes, accents, and personal obsessions, on intakes of breath and bouts of breathlessness. The machine can keep talking without ever stopping. It can generate text endlessly, always plausible, always smooth. The third language begins to blossom precisely when somebody has to pause for breath.
