Translated's Research Center

What happens in our brain when we switch from one language to another?

Culture + Technology

Imminent meets Martina Ardizzi – neuropsychologist and currently a research fellow at the Department of Medicine and Surgery of the University of Parma, where she conducts and coordinates research projects on the neurobiological basis of intersubjectivity and social cognition – to discuss what happens in our brain when we switch from one language to another.

Martina Ardizzi

Neuropsychologist and researcher

Martina Ardizzi obtained her Master’s degree in Neurosciences and Neuropsychological Rehabilitation at the University of Bologna (Italy) in 2010. In 2014 she obtained her PhD in Neuroscience at the University of Parma, Italy, under the supervision of Prof. Vittorio Gallese, studying the effect of childhood maltreatment on the development of intersubjectivity in Sierra Leone. She is currently a fixed-term researcher at the Department of Medicine and Surgery – Unit of Neuroscience of the University of Parma, where she coordinates national and international research projects examining how we represent our body in space, devoting particular attention to the role of early traumatic experiences and psychiatric diseases. Recently, her research interests have extended to the neurophysiological correlates of aesthetic experience. To pursue her research interests, she has spent time abroad, as a visiting researcher at Assam University (India), the University of Essex (UK) and the Berlin School of Mind and Brain (Germany). Martina Ardizzi also collaborates as a lecturer with various universities, companies and training institutions, and she is the author of several publications in top international neuroscience journals, as well as book chapters.


Martina Ardizzi’s answers offer very interesting insights not only into what happens at a neurological level when the human brain performs a “switch” from one language to another, but also into the differences between humans and machines in the translation process.

She explains that there is one primary consensus among scholars on the process of translation in the human brain: the idea that translation does not activate a language-specific network in the brain; instead, it results in the activation of a series of domain-specific and domain-general networks. This joint activation of different areas relating to executive functioning and to non-language-specific abilities ultimately creates the process of translation.

When discussing the impact that machine mediation could have on the translation process, she focuses on anthropomorphism, i.e. the tendency to attribute human characteristics to non-human agents such as machines. She states that research has shown that humans are more trusting of technologies that have human features such as a voice, a name or a face. She therefore concludes that the more biologically human-like an action or emotion appears, the less the human brain distinguishes human from non-human translation.

Lastly, Professor Ardizzi discusses the way our brain receives information, how human somatosensory and motor cortices work, and how humans are able to process both physical and figurative meanings. She touches upon the embodied model of the mind, which relates our high-level functions to our embodied experience and to our relation to the world as primarily mediated by our body.

1. What happens in our brains when we switch from one language to another?

“What happens in our brain when we are translating is more related to our ability to join the activity of several different brain regions related to executive functioning, and nonspecific, non-linguistic abilities, that we put together.”

2. What impact can the mediation of a machine have in this switch? 

“I think that we don’t have a proper answer yet. But I think that, to face this issue, our growing knowledge about what kinds of human abilities or human capacities we are willing to attribute to non-humans, so to “machines”, could be very eye-opening.”

3. Why are human beings faster than machines in understanding simple things like “a cup of tea”?

“Language development, language production obviously, but also language comprehension, is strongly related to my somatosensory and motor cortices. So, in a way, semantic access to a meaning is not solely related to linguistic regions, but also to the motor activation, or the somatosensory activation, related to the word that I’m saying.”


Interview script
1. What happens in our brains when we switch from one language to another?

That is a very challenging question, because even though what happens in the brain during translation has been a matter of debate and a field of interest for over fifty years, we still know very little about what exactly happens in our brain.
And that paucity of evidence is mainly due to the difficulties inherent in studying translation in the lab, in bringing translation into the lab, in a way.

When we study this complicated and high-level cognitive function in the lab, we need to make a lot of adjustments that sometimes reduce our ability to predict what happens in ecological tasks. So, just to give you a picture, for example, we need to use short sentences instead of long discourses.

We also need to control a lot of variables: the level of expertise of the translators matters.

Also the source and target languages make a difference. So it is different to translate from English to German and vice versa, obviously, or from English to French, for example. Another thing that we should remember is the fact that when we look at the brain at work, we always need to compare two different tasks, because our brain is always active, until we die. Luckily, it is always active. So I can’t see the brain region related to translation per se. I can just look at the brain regions that are more activated during translation than during another task.

While I’m speaking my native language, for example. That is important because it means that there is always a comparison, and the control task, the one that I use to set the baseline, matters a lot and makes a difference a lot of the time.

That said, we do know something, otherwise this interview would make no sense. I think that the point on which we have the greatest agreement among scholars is the fact that translation is not supported by a language-specific network in the brain. Rather, translation is supported by the activation of domain-specific and domain-general networks. Obviously, when the context demands it, language taps into these networks. This non-specificity is very common in our brain, and it means a lot.

We probably have the opportunity to discuss this further.
But it means that we cannot expect to see a specific region of the brain more strongly or differently involved in translation than in monolingual speaking, for example.

Rather, what we will see is the activation of a series of other cognitive networks related to, for example, self-monitoring, verbal working memory, set-shifting, the ability to cognitively shift from one language to another, and also the ability to suppress an overlearned response in favour of another one, and so on.
What we know is that during translation we have a stronger involvement of the temporal lobe for access to meaning.

We also have the crucial activation of a fronto-temporal network related to verbal and phonological working memory, and we also have the involvement of a lot of motor regions.
For example, the basal ganglia, the cerebellum, and also some fronto-parietal regions related to motor planning. Not just for the motor output of language, because obviously language has to be articulated, so we need a motor output all the time. But, as we will see later, this motor component of the network is also crucially involved in access to the meaning of some specific items in language.

Another thing that we know, and I think it is very interesting, is the fact that non-expert translators show greater brain activation than expert translators. We have few, but very well-done, longitudinal studies looking at brain activation during training, starting from a low level of expertise and reaching a high level of expertise in translation. Over the course of training, the activity in the brain became more tuned to the task. So the energy cost for our brain decreases as our expertise in translation improves.

This makes a lot of sense.

We know that this is the same for a lot of cognitive functions. When we are experts, we need to recruit less of our circuitry and fewer areas in the brain, because we know very well how to do something. So we are able to save energy and resources while doing the same thing.

To sum up, we know only a little, but I think that what we know is very eye-opening and insightful about translation. I also think that the fact that translation is not related to specific or strictly linguistic processing is very insightful and important to take into account. What happens in our brain when we are translating is more related to our ability to join the activity of several different brain regions related to executive functioning, and nonspecific, non-linguistic abilities, that we put together. So we allow the emergence of this complex ability that is translation.

2. What impact can the mediation of a machine have in this switch? 

Actually, I think that we don’t have a proper answer yet to what happens in our brain while we are facing a translation made by an artificial intelligence algorithm, for example. But I think that, to face this issue, our growing knowledge about what kinds of human abilities or human capacities we are willing to attribute to non-humans, so to “machines”, could be very eye-opening.

In psychological terms, anthropomorphism is the human tendency to attribute human abilities, internal states or capabilities to non-human agents. In the last decades, we have learnt a lot about the neurobiological underpinnings of this tendency. We know, for example, that at least at a first meeting – we don’t have data about what happens in our brain after a prolonged or long-lasting human-robot interaction – when we observe robots performing actions, or expressing some kind of emotion if they have a face, obviously, we recruit the same brain regions that we use when we observe other humans making the same gestures or expressing the same facial expressions of emotion.
Obviously, part of these regions are related to the mirror mechanism – not all of them, but most.

But that is not my point here. My point is that when I look at somebody doing something, for my brain it doesn’t matter whether the agent is human or non-human. If the action or the emotion has a biological likeness, I recruit the same brain regions. Obviously, the two cases don’t activate exactly the same network. For example, when I look at a robot doing something, there is also a greater activation of my occipital areas. That is probably because I need a better or more precise visual processing of the scene to grasp what is happening. But the thing that we have to remember is that we recruit the same motor, sensorimotor and visuomotor regions when we look at a robot or at a human agent.

Even if anthropomorphism is a widespread tendency, a lot of data show that we are more prone to attribute our human capabilities to robots that are human-like: the proximity and similarity that we feel with the robot mean something, and stimulate and push the anthropomorphism tendency further.

Building on these data and results, in the last five years there have been a lot of amazing studies trying to understand anthropomorphism towards artificial intelligences, which are probably the most disembodied non-human agents that we can imagine. These studies are very cool. They investigated a lot of specific cases, for example virtual drivers in self-driving cars. That was a matter of debate: when car accidents happen with a virtual driver, who do you blame? The virtual driver or the human driver: whose fault is it? The studies demonstrate that we mostly blame the virtual driver, and that we appeal more to internal factors when car accidents happen with a virtual driver. The studies also demonstrate that we tend to place more trust in a virtual driver that is more human-like.

This similarity also influences our feeling of trust towards an artificial intelligence algorithm able to drive our car. That is why a lot of companies have started to anthropomorphise their virtual drivers. For example, they give the virtual driver a proper name. They also add a human voice or some bodily features, such as hands or a face, and so on. That has an impact on our trust in these disembodied entities able to do human things. So I don’t have an answer, I have to admit it, obviously. But I think that, to try to substantiate this question with insights from neuroscience, there are two points that I outlined a few seconds ago that are important.

First, we tend to attribute human capacities to non-human agents. Second, in this process the human likeness, the similarity that I perceive in the non-human agent, matters. This could probably be very informative in the field of translation, where I can attribute the status of a translator to an artificial intelligence algorithm that translates my sentences from one language to the other. We probably need to start questioning how we can improve the human likeness of digital translators in order to foster the anthropomorphism tendency in this field too.

3. Why are human beings faster than machines in understanding simple things like “a cup of tea”?

Yes, as my previous answer probably already suggested, I think that the evidence that there is a strong involvement of my motor cortices in linguistic processes is, in a way, a Pandora’s box for translation. Today, there is this embodied approach to language.
According to this approach, we know that language development, language production obviously, but also language comprehension, is strongly related to my somatosensory and motor cortices. So, in a way, semantic access to a meaning is not solely related to linguistic regions, but also to the motor activation, or the somatosensory activation, related to the word that I’m saying.

Just to give you an example, if I listen to the word ‘grasp’, a verb that identifies a hand-related gesture obviously, I respond with my linguistic regions, I do a phonetic conversion and so on, of course. But there is also an activation of my motor cortex, particularly the somatotopic region that I actually use to move my hand and grasp my cup in the real world. The verb, which obviously transfers meaning in a symbolic way, activates my motor cortex and facilitates access to the concrete meaning, which is grasping something, through the activation of my motor cortices. That is extremely interesting. We are rooted in our body in a way that we can’t even imagine.

But neuroscience is starting to reveal this bodily rooting of our high-level functions, which I like. I activate my motor cortex in response to, for example, verbs of action, and it is the same if I listen to “I want to kiss you”, “I want to hug you”, “I want to kick you”.

Every time, I activate my motor cortices in a somatotopic way: the motor cortex that I would use to actually kiss or kick you in the real world. But that kind of somatomotor activation is not restricted to concrete verbs. We also use our somatosensory cortices, or even motor cortices, to access figurative meanings.

For example, if I say to you, “Luca, that man is oily. I don’t trust that man. I don’t feel comfortable when he comes into the room. I don’t like his behavior and I think he pretends to be someone that he isn’t.” All these concrete meanings can be condensed into a metaphor. So I say directly, “Luca, that man is oily,” and you suddenly grasp what I want to say, and you activate your sensorimotor cortices, the sensorimotor cortex that you would use if you actually had this oily sensation under your fingers. That helps you to substantiate and understand what I want to say. I think that this is amazing, because it means that we also use our body to learn how to say something, to transfer symbolic and abstract meaning.

We use our deeply embodied experiences to be in touch with others, because we share those common physical experiences. Obviously, this approach to language grew up within the current model of the mind: the embodied model. Basically, it relates a lot of high-level functions to our embodied experience and to our relation to the world, which are primarily mediated by our body. Over the history of science, and in neuroscience specifically, we have had a lot of different models of the mind.

For example, before this embodied model we had the computational model of the mind, according to which we could code cognition, consciousness and even the mind itself into algorithms, if the algorithms were good enough. So I don’t think that this embodied model of the mind is the best that we can have, or that it won’t change in the future.

For sure it will. But I think that, after so many models and so many researchers, we need to be astute enough to ask the best questions of the models in force at a specific time. I think that what we can learn thanks to the embodied model of the mind is what kind of linguistic processing a totally disembodied artificial intelligence can have. The algorithm that we use to translate my sentences doesn’t have fingers, so it doesn’t feel the oily sensation under its fingers. How can this disembodied experience impact the way it translates my oily words and my oily meanings? Does it matter or not, and what could happen if this kind of bodily experience could be brought into the process of virtual or digital translation?