Research
Martina Ardizzi
Neuropsychologist researcher
Martina Ardizzi obtained her Master's degree in Neurosciences and Neuropsychological Rehabilitation at the University of Bologna (Italy) in 2010. In 2014 she completed her PhD in Neuroscience at the University of Parma, Italy, under the supervision of Prof. Vittorio Gallese, studying the effect of childhood maltreatment on the development of intersubjectivity in Sierra Leone. She is currently a fixed-term researcher at the Department of Medicine and Surgery – Unit of Neuroscience of the University of Parma.
A Fundamental Distinction
Every time we engage in a dialogue with generative AI, our brain has to do something very concrete: process a sequence of words as information and often as an interaction, by linking together linguistic, attentional, and social networks. In recent years, the number of studies seeking to understand the relationship between brain function and generative AI has increased, enabling us to highlight certain observations and begin to explore various plausible hypotheses in greater depth.
First of all, a distinction between brain response and brain adaptation emerges from this recent research. By “brain response” we mean contingent and functional plastic changes, typically occurring over the short or medium term. In other words, it refers to how our brain alters its functioning in response to interaction with a technological tool, with the aim of improving the interaction between our biological system and the objects we use to act and communicate in the outside world.
About ten years ago, a study of touchscreen users and non-users caused quite a stir (Gindrat et al., 2015). The study showed that, compared to non-users, users of touchscreen smartphones had greater cortical somatosensory responses to tactile stimulation of the thumb and fingers, proportional to the intensity and even the recency of use, indicating use-dependent plasticity that rapidly updates the brain’s mapping of the hand.
This is nothing extraordinary, really: we already know from "classic" studies that the brain reshapes its maps based on what we use most. In primates, for example, repeated training of the fingers on moving surfaces causes the brain to allocate more space to the areas that represent those fingers, effectively making them more central to perception (Jenkins et al., 1990). Yet it occurred to nobody to suggest that, if these primates kept interacting with moving surfaces over several generations, their descendants would be born with progressively larger fingers: use-dependent plasticity is not inherited.
Yet in part, these were the kinds of comments that followed the publication of the article on smartphone users: in newspaper articles, debates, and many an online post, people wondered whether repeated and frequent use of these devices would lead to biological adaptations that would cause us to be born with larger and larger fingertips. In other words, a normal and well-established brain response was reinterpreted as a long-term adaptation.
By “adaptation” we mean stable, long-term changes (potentially even structural ones) that permanently alter the training of certain skills and, over time, the trajectory along which those skills develop. It is not far-fetched to imagine a long-term relationship between the brain and technology. If we look at the evolution of our species, it becomes clear that, over time, what we now consider to be part of the human mind has evolved through the interaction between our brain systems and the technology we have always created (Ardizzi, 2025).
What Happens After We Interact with LLMs
Although we can therefore hypothesize a specific evolutionary trajectory emerging from the presence of generative AI in our modern ecological niche, we must remember that the neuroscientific studies conducted so far are capable of highlighting primarily responses, rather than brain adaptations.
Large language models are becoming tools for conducting scientific experiments on language: their ability to predict word sequences converges with what we observe in the language networks of the human brain (measured, for example, using fMRI or intracranial recordings). The takeaway so far is not that “generative AI works like the brain.” The convergence primarily concerns aspects of language prediction and does not imply that the models possess an understanding of the world or of intentions. This distinction is also crucial in everyday experience: linguistic fluency can enhance the impression of reliability, but it does not guarantee accuracy (Waldrop, 2024).
In fact, when we converse with generative AI, our brain appears to employ at least two registers in parallel. On the one hand, there is the linguistic component: understanding a text, maintaining coherence, updating expectations, and integrating new information with prior knowledge. On the other hand, almost automatically, a "social" interpretation comes into play: we evaluate our interlocutor, attribute competence, intentionality, and reliability to it, and adjust our trust accordingly. It is this dual dynamic that makes the experience so powerful and, at times, so ambiguous: we receive words that sound as if they were produced by an agent, even though we know that the underlying mechanism is not a human mind.
We also know that, in order to understand the dynamics between the brain and generative AI, the type of feedback we receive matters. When the chatbot provides metacognitive support (which encourages users to monitor uncertainty, check their reasoning, and reflect on errors), both learning outcomes and indicators related to processing and monitoring during the task change (Yin et al., 2025). Along the same lines is the MIT preprint (“Your Brain on ChatGPT”), which has sparked debate because it attempts to directly measure what happens in the brain during writing with or without the assistance of an LLM.
Using EEG, the authors compared three conditions (brain alone, search engine, and LLM) and found differences in connectivity and engagement indices during the task; they also reported that when the tool is removed from individuals who have used it for a long time, some signals remain lower, suggesting a possible functional "inertia" (a sort of subthreshold level of engagement) associated with prolonged delegation. This is an intriguing and potentially important finding, but it should be treated with the necessary caution: the paper has not yet been peer-reviewed and has already drawn critical methodological comments calling for more conservative interpretations (Kosmyna et al., 2025; Stankovic et al., 2025).
If we want to broaden our perspective and try to imagine our future evolutionary trajectory, the most honest question is not “Is generative AI good or bad for the brain?” but rather “What type of cognitive activity are we training when we use it?” If we use it to minimize effort (ready-made answers, delegated writing, and minimal checking), it is to be expected that the brain will tend to conserve energy and reduce engagement. If we use it as a sparring partner (for counterarguments, alternatives, requests for sources, and clarification of limitations), it can become an amplifier of metacognition and understanding. In other words, what is at stake is not just what generative AI “does” to the brain, but what it prompts us to do—or not to do—with our thinking.
References:
Ardizzi, M. (2025). L’algoritmo bipede. L’avvincente storia di come mente, corpo e tecnologia evolvono insieme. EGEA. ISBN: 9791222930206.
Gindrat, A.-D., Chytiris, M., Balerna, M., Rouiller, E. M., & Ghosh, A. (2015). Use-dependent cortical processing from fingertips in touchscreen phone users. Current Biology, 25(1), 109–116.
Jenkins, W. M., Merzenich, M. M., Ochs, M. T., Allard, T., & Guíc-Robles, E. (1990). Functional reorganization of primary somatosensory cortex in adult owl monkeys after behaviorally controlled tactile stimulation. Journal of Neurophysiology, 63(1), 82–104.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (preprint). arXiv:2506.08872.
Stankovic, M., Hirche, E., Kollatzsch, S., & Doetsch, J. N. (2025). Comment on: Your Brain on ChatGPT… (preprint). arXiv:2601.00856.
Waldrop, M. M. (2024). Can ChatGPT help researchers understand how the human brain handles language? Proceedings of the National Academy of Sciences, 121(25), e2410196121.
Yin, J., Xu, H., Pan, Y., & Hu, Y. (2025). Effects of different AI-driven chatbot feedback on learning outcomes and brain activity. npj Science of Learning, 10(1), 17.