Technology
There is life after large language models. Artificial intelligence didn’t start with them, but the light of their success was so dazzling that it erased perspective and made it seem as though the history of artificial intelligence had come to an end. But reality is always more complex than it seems. Billions of dollars are invested in LLMs, yet alternative projects are also underway, suggesting that LLM development is not the only trajectory that can lead into the future. In the evolutionary process of artificial intelligence, different forms and paradigms are being tested. According to several leading scientists, a new frontier can be found around concepts like experience in AI. And the industry is already at work exploring this opportunity.
In this article, we explore the possibilities for a form of artificial intelligence that is not fed only by recorded and archived data, but also learns from real-time data acquired through sensors, cameras, radar, and any other tool for monitoring physical reality. When models like these are used to learn from direct experience of events in the physical world and to interpret space, their structure is at least in part new, and so, in turn, is their possible future. Thus, we ask ourselves whether simply making Large Language Models bigger is the only path to progress, whether we should expect new major scientific breakthroughs, and whether what will matter more is the money and technology of today’s Big Tech or the imagination of scientists, companies, and users.
Of course, the foundation of all the models we describe as artificial intelligence remains that of neural networks and deep learning techniques. But the structures of the models themselves may differ profoundly. Large language models are neural networks trained mainly on text to generate natural language, yet there are other, very different intelligent architectures.
We ask ourselves whether simply making Large Language Models bigger is the only path to progress.
Tesla’s AI, for instance, is mostly made up of large computer-vision neural networks that analyze images from cameras; perception models, necessary for recognizing cars, lanes, pedestrians, traffic lights, and so on; and planning and control models that decide steering, throttle, and braking. Tesla uses advanced architectures—including transformer-type networks—and a huge “end-to-end” neural network that takes camera video as input and outputs driving commands, but it is not a language model: it doesn’t “read” or “write” texts—it drives. Tesla’s architecture is also cheaper than that of some competitors: other self-driving cars carry even more sensors and radars, which make their intelligence better able to manage complex forms of real-time information.
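The division of labor just described (perception models feeding planning and control models) can be sketched in a few lines. Everything below is illustrative: the class names, the toy detection format, and the braking rule are invented for exposition, not taken from any real driving stack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "car", "pedestrian", "traffic_light"
    distance_m: float  # distance ahead, in meters

@dataclass
class Command:
    steering: float  # -1.0 (full left) .. 1.0 (full right)
    throttle: float  # 0.0 .. 1.0
    brake: float     # 0.0 .. 1.0

def perceive(frame: list[Detection]) -> list[Detection]:
    # Stand-in for the vision networks: here the "frame" already
    # contains detections; a real system would run large neural nets.
    return frame

def plan_and_control(detections: list[Detection]) -> Command:
    # Toy policy: brake hard for anything closer than 10 m, else cruise.
    if any(d.distance_m < 10.0 for d in detections):
        return Command(steering=0.0, throttle=0.0, brake=1.0)
    return Command(steering=0.0, throttle=0.3, brake=0.0)

frame = [Detection("pedestrian", 6.5), Detection("car", 40.0)]
cmd = plan_and_control(perceive(frame))
print(cmd)  # brakes, because a pedestrian is within 10 m
```

The point of the sketch is the shape of the system, not the rules: sensory input flows through perception into a decision that is a physical command, not a sentence.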
Another approach inspired by the idea of Experience in AI can be found in the context of “maritime intelligence.” These are models that are used to help the captains of ultra-luxury yachts steer their vessels. A company like Sail ADV, together with its subsidiary D.gree, is working on a form of “deterministic artificial intelligence” made up of a system of equations that describes the vessel’s operating characteristics in a fixed, transparent, and rigorous way, enriched with a deep learning component to interpret the massive amount of data on external conditions during navigation: temperature, wave motion, wind, currents, port structures, proximity of other ships, satellite data, cameras, radar, and so on.
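As a rough illustration of what “deterministic equations enriched with learning” can mean, the sketch below pairs a fixed, transparent formula (the classic displacement hull-speed rule, 1.34 times the square root of the waterline length in feet) with a trivial data-driven correction. The real system is far richer; the class name, the numbers, and the correction scheme here are assumptions made purely for exposition.

```python
def hull_speed_kn(waterline_ft: float) -> float:
    # Deterministic part: a fixed, transparent equation
    # (theoretical displacement hull speed in knots).
    return 1.34 * waterline_ft ** 0.5

class ResidualCorrector:
    """Stand-in for the learning component: accumulates observed
    (prediction, actual) pairs and learns an average correction for
    conditions the equations don't capture (waves, currents, etc.)."""
    def __init__(self):
        self.errors = []

    def observe(self, predicted: float, actual: float) -> None:
        self.errors.append(actual - predicted)

    def correct(self, predicted: float) -> float:
        if not self.errors:
            return predicted
        return predicted + sum(self.errors) / len(self.errors)

base = hull_speed_kn(64.0)   # 1.34 * 8 = 10.72 knots
model = ResidualCorrector()
model.observe(base, 10.1)    # logged runs in real conditions
model.observe(base, 10.3)
print(round(model.correct(base), 2))  # -> 10.2
```

The design choice being illustrated: the physics stays fixed and auditable, while the learned part only adjusts it from experience, rather than replacing it.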
A line of development that could bring language technology and spatial recognition closer together may well emerge from the translation business: devices are already in production —earphones or glasses— that listen to what two people say and translate it into their respective languages. This can be augmented with cameras to leverage facial expressions and gestures, improving the comprehension of the sentences to be translated. Additional sensors can help the AI systems used in translation better understand context. But these are only the current ideas: one can imagine these applications advancing in surprising ways.
A line of development that could bring language technology and spatial recognition closer together may well emerge from the translation business: devices are already in production —earphones or glasses— that listen to what two people say and translate it into their respective languages.
Maybe the most complex example of Experience in AI is in the domain of meteorology. The hyper-elaborate and detailed system of equations in traditional weather models, which reproduced the full intricacy of physical conditions in natural environments, is now being compared with meteorological models based solely on artificial intelligence.
As one of the pioneers in the field, Richard Turner, explained in an interview with Imminent, these AI models appear less costly and more efficient under most normal conditions, though they sometimes seem less able to fully forecast extreme events. And as Alex Waibel also notes, again in this year’s Imminent, not all spatial models are capable of recognizing every physical condition they may encounter—to the point that people say robots will, for a long time yet, be unable to outperform human plumbers when working on the many different types of piping found in different homes.
The design of these technologies will take into account the experiential specificities that give them meaning. Experience in AI could lead to a set of models embodied in objects that interact with their surrounding environment; its development therefore depends on the co-evolution of models, bodies, and contexts. These contexts, in turn, arise from the accumulation of technological, cultural, economic, social, ecological, and other forms of experience. All of this inevitably influences the design of the models themselves, as well as the objects that must embody them and enable their interaction with space. In short, the quality of spatial models will depend both on the sophistication of the artificial intelligence and on the conditions of the space itself: models applied to predominantly artificial environments with fairly standardized forms and infrastructures, such as a hyper-modern city, will be deployed quite easily, while they may be less likely to work in contexts dominated by the immense variety of natural conditions, with biodiversity developing in very different ecosystems.
Experience in AI could lead to a set of models embodied in objects that interact with their surrounding environment.
The debate on the strategic context
This development fits within a strategic landscape shaped by a complex debate, one that spans corporate and geopolitical considerations, financial and scientific issues, comparisons among alternative value systems, and disputes over environmental compatibility. The emergence of Experience in AI is one chapter in a broader and deeper transformation, intelligible only with a mindset oriented beyond the constraints of the present, aware that the aim is not to make predictions, but to prepare for alternative futures.
The scope of the debate is broad and profound. From the history-of-technology point of view, one could compare the present transformation to the big infrastructural leaps of the past: the building of railways, electricity, the internet, mobile phones. On the scientific front, artificial intelligence poses a major cognitive challenge, reshaping how we understand the workings of human knowledge itself, in a way reminiscent of the development of neuroscience. The economic, political, technological, and scientific driving forces behind such a transformation are strong. But precisely because it runs so deep, AI’s evolution is also driven by the imagination of the people who design and adopt it—an imagination shaped in turn by the grand narratives we use to interpret today’s world. As the economist and Nobel laureate Daron Acemoğlu has shown, building a perspective about AI involves the narratives that guide collective human beliefs about progress, power, markets, sustainability, and more.
Judging by the statements and actions of the leaders driving the development and adoption of artificial intelligence in the West, two categories of stakeholders seem to exert particular influence over the deliberations that set the agenda for collective decision-making in this area:
- The leaders of mega-tech companies, such as Nvidia, Alphabet, Microsoft, Amazon, Tesla, Oracle, Meta, OpenAI, Anthropic, and others, who invest in developing the technology.
- The leading scientists who played a decisive role in advancing the foundational ideas behind today’s AI, such as Fei-Fei Li, Yann LeCun, Stuart Russell, Geoffrey Hinton, Yoshua Bengio, Alex Waibel, and Gary Marcus.
Western Big Tech leaders now seem convinced that massive investments in building infrastructure for generative AI are destined to transform the economy. They are asking governments to let them innovate without regulatory constraints so they can secure oligopolistic market positions, while in return, promising supremacy over systemic rivals in China.
On the other hand, scientists such as those mentioned above are convinced that the current paradigm guiding AI development is not sufficient to ensure the creation of technologies that can: operate intelligently across diverse contexts; learn and function with less data and lower energy use; reduce errors, improve safety and privacy, respect copyright, and produce trustworthy results; and work with real-time data to develop interactions with the physical environment that are sufficiently flexible and effective.
One can perhaps discern a fundamental rift between Big Tech leaders and leading scientists. The techno-capitalists essentially believe that the future of artificial intelligence chiefly requires ever more colossal investments to build ever more powerful infrastructure, data centers, and ever larger models capable of processing amounts of data equal to everything recorded digitally, and more. The scientists who have shaped the history of AI as a discipline believe that what is needed are new scientific leaps and far greater awareness of the limits and the depth of the consequences of the results achieved so far.
AI’s evolution is also driven by the imagination of the people who design and adopt it.
Between those different opinions, the other stakeholders adapt, worry, show uncertainty, and at times, though rarely, assert themselves: financial investors follow the money; entrepreneurs adopt artificial intelligence in their companies with some caution; citizens use it as consumers and worry about it as workers; and politicians, who must interpret everyone’s interests, as well as their own, regulate the technology’s development and set policies to steer innovation and adoption, taking into account the opportunities and systemic risks associated with this major technology.
In this context, certain issues seem rather stuck. What are the right laws for regulating artificial intelligence? Are the closed, American-style models smarter than the open models from China—and partly Europe—that consume less? How can privacy and copyright be safeguarded in the face of generative models? What is the best way to use these generative models? What will happen to jobs that appear replaceable by AI? These questions are asked obsessively and elicit repetitive, seldom satisfying answers. Yet in the new landscape taking shape with Experience in AI, this debate may be redefined.
The experience approach leads to an intelligence that emerges from a perception/action loop between a body, its sensors, and the environment, rather than from disembodied symbol processing.
In fact, it marks a paradigm shift from symbol-processing systems to agents that perceive, act, and adapt in the real world. It unites cognitive computation and embodied agency, so that intelligence is expressed not only through inference, but through motion, interaction, and physical consequence. While disembodied AI is about text, images, or static data streams, the new approach works within a flux of data coming from environments where perception-decision-action loops are linked to safety, ethics, and material dynamics.

In such systems, from a scientific point of view, cognition is inseparable from embodiment. From a technological point of view, new agents integrate multimodal sensing (vision, audition, haptics), adaptive control, and context-aware planning within closed feedback architectures that continually align intent with outcome, so that each perception alters the world and each actuation reshapes the context for subsequent cognition. From an architectural point of view, Experience in AI is grounded in physical law, with differentiable physics engines, force-aware control loops, and causal world-model learning replacing abstract statistical mappings with embodied reasoning constrained by thermodynamics, kinematics, and topology: where conventional AI interprets reality, experience is part of it.
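The closed feedback architecture described above, in which each actuation reshapes the context for the next perception, can be reduced to a deliberately tiny sketch: a thermostat-like loop in which sensing, deciding, and acting share one mutable world. All names and numbers are invented for illustration.

```python
def sense(world: dict) -> float:
    # Perception: read the only "sensor" in this toy world.
    return world["temperature"]

def decide(reading: float, target: float = 20.0) -> float:
    # Decision: a proportional rule, heat when cold, cool when warm.
    return 0.5 * (target - reading)

def act(world: dict, heating: float) -> None:
    # Action: actuation changes the very environment sensed next time.
    world["temperature"] += heating

world = {"temperature": 14.0}
for _ in range(8):                      # perception -> decision -> action
    act(world, decide(sense(world)))
print(round(world["temperature"], 2))   # -> 19.98, converging on the target
```

Each pass through the loop senses a world that the previous action has already changed; that coupling, scaled up to rich sensors and actuators, is what distinguishes embodied agents from systems that only read archives.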
While large language models (LLMs) have mastered the digital world of text and code, embodied systems that perceive, reason, and act in the material world become a “new frontier” of artificial intelligence. As scientists put it, an autonomous car is not merely software on wheels: it is intelligence emerging from a closed perception/action loop where a mistake doesn’t just produce a hallucinated fact, but a broken object or a physical crash. The same can be said about a robot that is not able to fill a dishwasher.
In this context, responsibilities are even heavier than in disembodied AIs: applications span precision surgery, industrial co-manipulation, autonomous mobility, and environmental robotics, domains where erroneous predictions can create physical hazards. Preparing the future means organizing in a context in which embodiment itself becomes the substrate of cognition, and Experience in AI marks the inflection point from reactive automation to proactive, context-sensitive reasoning, with machines that learn from and reason about their environments. With this paradigm shift, choices are open to different possibilities and human cognitive abilities become more important than predefined automatic, financial, and technical patterns.
In fact, Experience in AI marks a paradigm shift from symbol-processing systems to agents that perceive, act, and adapt in the real world.
The implicit questions are always the same: how much will the scale of investment matter in shaping possible and preferable futures, and how much will be decided by scientific research? If fresh scientific leaps are likely in the most important areas of AI development, is it then possible that regions strong in science but weaker financially, such as Europe, could regain ground or write their own version of the history of artificial intelligence? Will AI be designed to replicate what humans do, or to do what humans do poorly? And in the latter case, how will we cultivate the imagination needed to design what has so far seemed unthinkable?
In this field, centered on robotics in all its forms and goals, it is likely that a great deal of human cognition will still be needed to make desirable progress.
To build truly trustworthy systems that make the most of Experience in AI, we first need scientific breakthroughs in grounded world models that genuinely understand and predict real-world physics and causal structure, rather than just statistical patterns. To be clearer: grounded world models are internal models that an AI builds that are tied to the real world it senses and acts in, not just to symbols or text. We also need a unified theory that combines deep learning with control theory so that complex, contact-rich robots can be provably stable, robust, and safe, plus a principled understanding of the sim-to-real gap and how to adapt safely online. On top of that, we must develop solid models of human behavior and social norms so robots can act in ways that are legible, predictable, and aligned with human expectations.
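To make “grounded world model” concrete, here is a minimal sketch under strong simplifying assumptions (a one-dimensional linear world, noiseless observations): an agent collects transitions from interaction, fits a transition model from that experience, and then uses the model to predict the outcome of an action before taking it. All coefficients and names are invented for illustration.

```python
import random

def true_dynamics(x: float, u: float) -> float:
    # The "real world" the agent interacts with (hidden from the model):
    # next state = 0.9 * state + 0.5 * action.
    return 0.9 * x + 0.5 * u

# 1. Collect experience: (state, action, next_state) transitions.
random.seed(0)
data = []
for _ in range(200):
    x, u = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append((x, u, true_dynamics(x, u)))

# 2. Fit a world model next = a*x + b*u by least squares
#    (normal equations solved with Cramer's rule; no libraries needed).
Sxx = sum(x * x for x, u, y in data)
Suu = sum(u * u for x, u, y in data)
Sxu = sum(x * u for x, u, y in data)
Sxy = sum(x * y for x, u, y in data)
Suy = sum(u * y for x, u, y in data)
det = Sxx * Suu - Sxu * Sxu
a = (Sxy * Suu - Suy * Sxu) / det
b = (Suy * Sxx - Sxy * Sxu) / det

# 3. Ask the model what an action would do *before* doing it.
predicted = a * 1.0 + b * 0.2   # predicted next state from x=1.0, u=0.2
print(round(a, 3), round(b, 3), round(predicted, 3))
```

Because this world is linear and noiseless, the fitted model recovers the true dynamics almost exactly; the hard scientific problems named above (contact-rich physics, causal structure, safe online adaptation) begin precisely where these assumptions fail.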
Technologically, this calls for physically grounded foundation models that fuse vision, language, 3D perception, tactile and force sensing, and control, with safety constraints built in from the start. We need massive, realistic simulation and diverse robot datasets, including rare but critical edge cases, and a new generation of sensors, compliant actuators, and efficient on-board compute for reliable real-time autonomy. Finally, we must create engineering methodologies, standards, and verification and certification tools that allow learning-based robotic systems to be tested, audited, and approved for safety-critical use.
And in the latter case, how will we cultivate the imagination needed to design what has so far seemed unthinkable?
These are only a few hypotheses, and certainly not enough to provide a complete picture of the challenges opened up by Experience in AI. But they make it clear that a path defined solely by increasing computing power will not be sufficient to achieve the goals at hand. In fact, the pathways of artificial intelligence are multiple and complex. The scenarios proposed here serve only to bring order to these issues by suggesting a classification of the phenomena we can expect under the various hypothesized conditions.
A four-scenario framework
Innovation in artificial intelligence is likely to follow several different paths of development. To describe the variety of possibilities, we can imagine several scenarios. The main variables that can form the interpretative framework for these different scenarios are:
- The source of value: this may lie either in the increase of computing power in the machines designed to handle artificial intelligence, or in scientific research that enables the discovery of new solutions for the creation of paradigmatically new models. Thus, one can say that the sources of value that innovation can deliver are “brute force” and “research.”
- The source of data: this may consist of the vast archives of data stored across all the digital platforms that make up the environment of everyday life in developed countries, or it may be the set of real-time data, gathered from the multitude of sensors and cameras of every kind, that record what happens moment by moment. Thus one can say that the sources of data that innovation can use are “archive” and “experience.”
In this way, four scenarios quickly emerge:
- Archive and brute force. Innovation occurs by increasing computing power and using ever more stored data.
- Experience and brute force. Innovation occurs by increasing computing power and using ever more real-time data.
- Archive and research. Innovation occurs through new scientific discoveries and by using ever more stored data.
- Experience and research. Innovation occurs through new scientific discoveries and by using ever more real-time data.
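Mechanically, the two axes combine into the four quadrants; a trivial sketch (the strings are simply this article’s scenario labels):

```python
# The two framework variables: source of value x source of data.
SCENARIOS = {
    ("brute force", "archive"):    "Archive and Brute Force",
    ("brute force", "experience"): "Experience and Brute Force",
    ("research",    "archive"):    "Archive and Research",
    ("research",    "experience"): "Experience and Research",
}

def classify(value_source: str, data_source: str) -> str:
    """Map the two framework variables to a scenario label."""
    return SCENARIOS[(value_source, data_source)]

print(classify("research", "experience"))  # -> Experience and Research
```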
In each of these four quadrants, frontier topics can be found on which significant innovations may develop.
Frontiers of innovation

Hypotheses on four alternative frontiers of artificial intelligence, organized by the main driver of innovation: compute power versus scientific research, and archived data versus real-time data.
Scenario 1: Archive and Brute Force
In this scenario, innovation in artificial intelligence continues to be driven primarily by increases in computing power and the expansion of stored data archives. Progress does not come from fundamentally new scientific breakthroughs or paradigm shifts, but rather from scaling up existing architectures, refining data-management techniques, and integrating these systems into everyday tools and environments.
Core dynamics
The underlying logic of this scenario is quantitative acceleration.
- Computational resources become cheaper and more powerful, enabling the training of ever larger and more capable large language models (LLMs) and multimodal systems.
- Data accumulation grows exponentially, as decades of digital activity—documents, videos, social media, transaction logs, and sensor readings—are continuously archived and made accessible to models.
- The main driver of innovation is thus brute computational force applied to an ever-expanding data corpus: more parameters, larger context windows, longer training cycles, and improved optimization.
Technological focus
In this context, LLMs remain at the center of innovation. Advances arise from:
- Agent orchestration: multiple LLM-based agents collaborate, delegate tasks, and reason collectively to achieve complex objectives, often integrated with enterprise software ecosystems.
- Business efficiency and cost reduction: companies deploy AI systems to automate workflows, analyze archives of corporate knowledge, and generate reports, code, and customer interactions with minimal human intervention.
- Wearable and embedded devices: AI capabilities extend into personal assistants built into smart glasses, watches, and other connected devices, leveraging a cloud-based computational backbone to deliver seamless contextual responses.
- Voice and multimodal interfaces: natural conversation becomes the primary medium for interaction with machines, as LLMs gain access to visual, auditory, and textual archives simultaneously.
Societal and economic implications
In the “Archive and Brute Force” world, innovation is concentrated in large technology ecosystems that control the necessary computing infrastructure and data reserves. Efficiency and convenience improve dramatically across business and daily life, but the model also risks reinforcing centralization, data dependency, and high energy consumption.
The pace of progress feels steady and impressive, but also predictable: systems become faster, broader, and more fluent, yet not necessarily more intelligent in a human or scientific sense. Creativity and insight emerge from the sheer scale of computation rather than from new theoretical understanding.
Summary
This scenario represents a continuation and amplification of current trends: innovation by scale, speed, and data density, where intelligence expands not by reinventing itself, but by multiplying its reach. It is AI as infrastructure and optimization, a world of powerful archives and relentless computation.
Scenario 2: Experience and Brute Force
In this scenario, artificial intelligence evolves through the fusion of massive computational power with a constant stream of real-time data. Innovation does not emerge from new scientific paradigms, but from the capacity to process, interpret, and react instantly to a living world of sensors, cameras, and digital interactions. Where the previous scenario (“Archive and Brute Force”) builds knowledge from stored archives, this one feeds on continuous experience, or real-time data.
Core dynamics
The driver of innovation here is the instantaneity of information.
- Computing infrastructure reaches unprecedented levels of scalability, enabling models to learn and act continuously from live inputs.
- Networks of sensors, vehicles, satellites, and personal devices produce vast quantities of dynamic data, forming a digital reflection of the physical world: a global nervous system.
- LLMs and foundation models act as the interpretive layer of this system, turning raw sensory streams into decisions, predictions, and coordinated actions.
This scenario is defined by responsiveness rather than reflection: AI becomes a real-time organism, always observing, always adjusting.
Technological focus
LLMs remain the central foundation models, but are extended into domains where immediacy is crucial and innovation occurs through their integration with live data pipelines.
Examples include:
- Self-driving vehicles: real-time fusion of sensor data, road conditions, and predictive behavior modeling enables safer, adaptive autonomy.
- Military and defense systems: autonomous drones and decision-support systems use live feeds to coordinate complex operations.
- Smart infrastructure and cities: LLMs orchestrate logistics, energy flows, and emergency responses through constant data feedback.
In each case, brute computational strength and real-time sensory input push AI into continuous participation in human and environmental systems.
Societal and economic implications
In this scenario, AI becomes a nervous system of the planet. Consequences are:
- Radically increased efficiency and responsiveness in logistics and urban management.
- Enhanced safety and precision in critical operations.
- The rise of adaptive services that anticipate user needs.
However, it also introduces profound ethical and governance challenges:
- Surveillance and control become pervasive.
- Decision-making shifts toward opaque, automated ecosystems, reducing human oversight.
- The speed of information outpaces the capacity of regulation.
The balance between efficiency and autonomy, safety and privacy, is the defining problem of this world.
Summary
“Experience and Brute Force” represents a future where intelligence is no longer static or retrospective, but situational and continuous. Innovation is driven by computing muscle and sensory immersion, as machines learn not only from what has happened, but from what is happening right now. It is a world of living data, where AI becomes an active participant in human experience.
Scenario 3: Archive and Research
In this scenario, innovation in artificial intelligence is driven by new scientific discoveries while continuing to rely on the vast archives of stored data accumulated over the past decades. Progress arises not from greater brute computational force, but from intellectual and methodological breakthroughs that make AI systems smarter, leaner, and more efficient. The emphasis shifts from scale to understanding—from raw power to conceptual elegance.
Core dynamics
The main source of value lies in research itself: in the creation of new ideas and mathematical frameworks capable of extracting more meaning from the same or even smaller quantities of data. At the same time, the source of data remains the massive, structured archives of the digital world—scientific datasets, libraries, code repositories, and historical corpora. Innovation happens at the intersection of theory and archive: how to design models that can learn more efficiently from what humanity has already recorded. This scenario is guided by a principle of optimization and sustainability.
Technological focus
In this world, researchers and developers pursue new scientific frontiers rather than scaling existing models indefinitely. Key areas of progress include:
- Energy and efficiency optimization: finding ways to drastically reduce the power and cost needed to train and run AI systems.
- Open and transparent models: developing frameworks with open weights and auditable structures, fostering collaboration, trust, and reproducibility across the research community.
- Model specialization and right-sizing: moving away from one-size-fits-all supermodels toward an ecosystem where each model is optimized for specific types of tasks or contexts.
- New learning paradigms: advancing beyond current transformer-based approaches to discover new forms of reasoning, memory, and abstraction.
The result is a landscape where research laboratories, universities, and open consortia reclaim a central role in AI progress, alongside major corporate players.
Societal and economic implications
This scenario fosters a more sustainable and pluralistic AI ecosystem.
- Energy consumption and hardware costs decline as models become more efficient.
- Transparency and openness grow, counterbalancing the concentration of power.
- Interoperability increases: models of different sizes and functions can be composed and reused across sectors.
- Innovation becomes distributed, allowing smaller research centers and startups to participate in the field.
This world moves more slowly than those powered by brute force. Breakthroughs depend on intellectual creativity and collaboration, not on industrial momentum.
Summary
“Archive and Research” envisions a future where AI evolves through scientific ingenuity applied to the wealth of existing data. It is a world of elegant algorithms, open collaboration, and sustainable progress—where the next generation of models learns not by being bigger, but by being smarter: the rediscovery of the human spirit of research as the true engine of intelligence.
Scenario 4: Experience and Research
In this scenario, innovation in artificial intelligence emerges from new scientific discoveries combined with the continuous flow of real-time data.
It represents a world where AI evolves not by scaling existing models or relying solely on stored archives, but by reimagining the very paradigms of intelligence. Here, science and experience merge: research provides conceptual breakthroughs, and real-time data supplies the living material through which those breakthroughs come alive.
Core dynamics
The main source of value in this scenario lies in original research—the exploration of novel architectures, learning methods, and cognitive frameworks that go beyond today’s LLMs.
The source of data is real-time experience: information continuously gathered by sensors, robots, medical devices, satellites, and digital interfaces.
This dynamic creates an ecosystem where models don’t just learn from the past, they perceive, interact, and adapt in the present. Innovation is a scientific and experiential process.
It is a scenario defined by curiosity, experimentation, and imagination, where the limits of AI coincide with the limits of human creativity.
Technological focus
This world is the cradle of new paradigms of intelligence, where edge AI becomes more and more important. Innovation appears where continuous interaction and deep scientific inquiry converge:
- Multimodal models: capable of processing and integrating multiple simultaneous streams of data to create adaptive representations of the world.
- Robotics: intelligent systems that move, perceive, and reason in real time, bridging physical and digital environments.
- Health and medicine: AI capable of monitoring patients continuously, understanding complex biological feedback, and supporting preventive or personalized care.
- Satellite and planetary control: scientific models managing space systems, Earth observation, and environmental monitoring.
- Instant translation and communication: systems that dissolve linguistic and cultural barriers.
In this scenario, innovation stems from combining perception, reasoning, and adaptation across multiple data modalities.
Societal and economic implications
This scenario opens the door to a new wave of global competition and collaboration. Because it depends less on industrial infrastructure and more on scientific creativity, regions and institutions that have not dominated the LLM era—such as Europe—may gain a second chance. At the same time, the regulatory challenges grow more complex: systems that perceive and act in real time raise profound questions about accountability, transparency, and human control. Yet the atmosphere of this world is one of possibility. The boundaries of AI are no longer determined by computation or data quantity, but by the imagination of scientists and engineers.
Summary
“Experience and Research” envisions a future where AI becomes a living science—an evolving discipline grounded in both real-time perception and conceptual discovery.
It is a world of multimodal intelligence, adaptive machines, and boundless creativity, where innovation is limited not by data or hardware, but only by the reach of human imagination.
Conclusion
To sum up: we can outline four scenarios.
1. Archive & Brute Force — Industrial optimization
- Innovation through scale, compute, and stored knowledge.
- Efficiency, automation, and integration dominate.
2. Experience & Brute Force — Real-time autonomy
- Continuous learning from live environments.
- Self-driving, drones, social networks, smart cities.
3. Archive & Research — Scientific efficiency
- New architectures, smaller and smarter models.
- Sustainability, transparency, open research culture.
4. Experience & Research — Creative intelligence
- New paradigms of multimodal, adaptive AI.
- Robotics, healthcare, communication, and exploration.
These four quadrants describe the strategic frontiers of AI innovation:
- From industrial acceleration to scientific reinvention.
- From reflection on archives to interaction with experience.
- From power to imagination.
Together, they could help define the future map of artificial intelligence—a space where the direction of progress will depend on which axis humanity chooses to push forward: brute force or research; archive or experience. These paths will likely all be pursued, but their impact will depend on the context from which the main results will emerge. Will economic competition matter most? Or the global struggle for power? Or collaboration to advance shared knowledge? Certainly, from a scientific and technological standpoint, some dimensions of AI progress will remain fundamentally important across the board. Language, for example, will remain central—along with translation and all the ways it connects to people’s physical experience.
Luca De Biase
Editorial Director
Journalist and writer, head of the innovation section at Il Sole 24 Ore. Professor of Knowledge Management at the University of Pisa. Recent books: Innovazione armonica, with Francesco Cicione (Rubbettino, 2020); Il lavoro del futuro (Codice, 2018); Come saremo, with Telmo Pievani (Codice, 2016); Homo pluralis (Codice, 2015). Member of the Mission Assembly for Climate-Neutral and Smart Cities at the European Commission. Co-founder of the ItaliaStartup Association. Member of the scientific committees of Symbola, Civica, and Pearson Academy. Until January 2021 he chaired the "Working Group on the phenomenon of hate speech online", established by the Minister of Technological Innovation and Digitization with the Ministry of Justice and the Department of Publishing at the Presidency of the Council. He designed and managed La Vita Nòva, a pioneering bi-monthly review for tablets, which won a Moebius Award (2011, Lugano) and a Lovie Award (2011, London). His work has been honored with the James W. Carey Award for Outstanding Media Ecology Journalism 2016, by the Media Ecology Association.


