Translated's Research Center

You Say AI, I See Peonies

Can AI understand flowers? Patrizia Boglione, VP at Translated, examines the anthropological and branding implications of flowers and the challenges AI faces in interpreting the plurality of senses.


Trends

Every time I’m in a meeting with my colleagues, talking about the future of artificial intelligence, technology, and how it intersects with our lives, after a while I find myself picturing a huge peony.

It happened to me once before, during a zombie video game demo. At first, I thought it was my way of distancing myself from a dystopian future that conjures up cold, dehumanized worlds. I’m a creative, a designer born in a pre-digital world. I’m always focused on the sensory experience, and maybe that lush, fragrant flower seemed to shield me from a future I don’t want to live in.

As I continued to search for answers, I realized my peony wasn’t an escape. It was a concrete question: why doesn’t AI understand flowers? Because to understand a flower, you need a body. You need to breathe in its scent, feel the weight of its petals, recognize the rhythm as it opens and closes, and the models we now call intelligent haven’t yet learned to truly inhabit the world.

The news was broken by Designboom on June 4, 2025,1 linking to a paper in Nature Human Behaviour2 and a press release by Ohio State University (OSU).3 A study by the university argued that large language models (such as ChatGPT or Gemini) don’t handle sensory-rich concepts—for example, “flower”—in the same way that humans do. The reason is simple: they lack the firsthand experience of smell, touch, movement, and more.

The researchers compared how humans and LLMs encode 4,442 English words, validating the models’ representations against the Glasgow Norms—a dataset of normative human ratings for 5,553 words across psycholinguistic dimensions, including concreteness, imageability (how easily a word evokes a mental image), and affective properties such as arousal, valence, and dominance.4 The verdict: LLMs do well on non-sensorimotor features, but stumble dramatically when senses and movement get involved. Several model families were tested (GPT-3.5/4 and PaLM/Gemini), and even the most advanced exhibit a “gap” in concepts rooted in bodily experience.


Why doesn’t AI understand flowers? Because to understand a flower, you need a body. You need to breathe in its scent, feel the weight of its petals, recognize the rhythm as it opens and closes, and the models we now call intelligent haven’t yet learned to truly inhabit the world.


To get closer to human understanding, we need sensory grounding and interaction with the world (e.g., robotics, real multimodality). Words alone won’t cut it—and neither will text plus images.

This realization shifted my whole perspective. If large language models can describe almost anything, but trip up when things get sensory, then the stakes for people who work with brands are not just an inaccurate text, but a fundamentally untrue experience. Here, the two perspectives that Imminent brings to the table—anthropological and brand—come together naturally: the senses build ways of knowing and speaking, and when it enters physical and multisensory space, an AI that learns from real-world experience can offer new tools to transform that knowledge into broader, fairer, more situated brand narratives.

Anthropologically, senses aren’t merely window dressing for language: they’re its inner grammar. Coolness can mean purity or detachment; red can signal a celebration or a warning; silence can be a luxury or cause for discomfort. Touching a peony is not just about feeling a surface; it’s moving through memories, norms, and rituals.
It’s knowledge that comes before language, but which shapes language and guides its meaning. That’s why translation today can no longer be limited to shifting terms from one language to another: translating is about crafting equivalent experiences. It’s about using different ingredients to create the same lived effect. The goal is to elicit the same emotion, build the same trust, and invite the same action.

For brands, the arrival of AI grounded in real-world observation and action isn’t a new special effect, but a seismic shift in the ecosystem. Intelligence is no longer disembodied, symbolic, all eyes and no hands; it’s starting to actively participate. It feels, measures, modulates, and moves. In this transition, branding is no longer just a logo and copy—it’s a sensory contract rooted in real places. I don’t expect the machine to understand a peony; I’m asking it to collect signals—such as humidity, light, tiny movements, and tones—and to present them back to us, humans, so that we can orchestrate them with precision. Technology puts things in order, but we deal with the meaning. This is how augmented narratives are born. They’re not one-way spectacles, but vibrant encounters: experiences in which brands regulate temperature, rhythm, texture, sound, and brightness with good manners, asking permission, announcing their uncertainty when necessary, and making space for silence when the moment calls for it.


Translation today can no longer be limited to shifting terms from one language to another: translating is about crafting equivalent experiences.


This meeting between anthropology and brand strategy makes my hypothesis concrete: Physical AI will only be truly meaningful if it pushes us toward plurality. A plurality of senses, because visuals alone are not enough; a plurality of languages, because we also speak with touch, rhythm, and distance; a plurality of intelligences, because human, machine, and environment can negotiate experience together; an ethical plurality, because dignity, consent, and accessibility are not a footnote, but the very heart of the project. It’s not an invitation to confusion: it’s a refinement. We move from recognition at first sight to recognition through encounters. The same melody, different arrangements.

This is where I come back to my peony. If I shift my gaze from the label to the mark an experience leaves on the body, everything realigns. Identity isn’t flattened out; it’s modulated. It maintains a stable sensory signature, that unique way a brand “feels,” and finds local dialects that do not betray the core, but translate it. In a northern European context, an experience may speak softly, with clear light and spacious pauses; in the Mediterranean, it will be warmer and closer; in a region accustomed to engineering precision, it will make the rationale behind its gestures explicit. It’s the same narrative, but it’s delivered at the right distance and with the right rhythm.



Understanding this also means recognizing that the most important metrics will change. Not just clicks, reach, or viewing time, but the time spent building trust, sensory memory, and perceived consistency across channels. Trust isn’t an announcement: it’s coherence you feel. It’s a reassuring warmth, the texture of a material that invites a specific action, a sound that supports without intruding, a scent that anchors a memory to a place and a moment in time. These signals leave traces of values in our bodies, more reliable than any manifesto.

There’s one lingering question that keeps us honest: what, out of all of this, preserves our humanity? Ironically, it’s the very decision to treat the body as the first technology. If the body is the original interface, then design goes back to being a choreography of encounters—between matter and meaning, between local and global, between measured signals and lived senses. When it breaks free from the shackles of imitation and steps into the world, experience-driven AI can contribute to this choreography: not to decide for us, but to broaden the field of what we can observe, connect, and make kinder.


In a northern European context, an experience may speak softly, with clear light and spacious pauses; in the Mediterranean, it will be warmer and closer; in a region accustomed to engineering precision, it will make the rationale behind its gestures explicit. It’s the same narrative, but it’s delivered at the right distance and with the right rhythm.


That’s why I keep seeing a peony. Not as some romantic decoration, but as a tool to work with.
It reminds me that every message, in order to be real, has to pass through a body. That the translation we need is not word for word, but experience to experience. That innovation is not about turning up the volume, but calibrating more finely. And that the task for brands today is to teach technology good manners: to ask, wait, apologize, and step aside when the context calls for it. At that point, the peony ceases to be an image and becomes a test for our conscience. Are we designing things that can actually be felt? Do they leave a trace that words alone can’t capture? If the answer is yes, then we’re on the right path. Not an AI that claims to understand flowers, but an ecosystem where technology learns how to behave and humans rediscover their competitive edge: shaping sense through the senses.
After all, it comes down to this: less noise, more presence. And a scent that lingers.

In the end, when I see a peony, what I see is a simple flower, one that, across different cultures, symbolizes a value that grows from within. This is the challenge I face today as a creative: to create something that unfolds like a peony, layer after layer, intention after intention, and then to let it go when the season changes. Leaving behind seeds of innovation, value for brands, and the ability to amaze the people who come after me. Because the aim isn’t to last forever.

What matters is blooming at the right moment. Yes, at Translated “We believe in humans,” and more and more “We live as humans.”


Patrizia Boglione


Brand & Creative VP at Translated

Patrizia Boglione is a brand strategist, cultural intelligence specialist, and creative education designer. She currently works as Brand & Creative VP at Translated, where she focuses on strategy, branding, and cross-cultural intelligence. She built her career in communication at McCann Erickson and in branding at Angelini Design, where she was a strategic brand director working for the Italian, European, and Asian markets. She also designs education programs on hybrid creativity, trend research, and cultural intelligence.

References

  1. Why AI language models like ChatGPT & Gemini can’t understand flowers like humans do, Designboom (June 4, 2025).
  2. Qihui Xu, Yingying Peng, Samuel A. Nastase, Martin Chodorow, Minghua Wu & Ping Li, “Large language models without grounding recover non-sensorimotor but not sensorimotor features of human concepts,” Nature Human Behaviour 9, 1871–1886 (2025).
  3. Can AI understand a flower without being able to touch or smell?, Ohio State University College of Arts and Sciences (June 4, 2025).
  4. G. G. Scott, A. Keitel, M. Becirspahic, B. Yao, and S. C. Sereno, “The Glasgow Norms: Ratings of 5,500 words on nine scales,” Behavior Research Methods 51, no. 3 (2019): 1258–1270.