This interview is part of a broader editorial project by Imminent, featuring conversations with expert professionals collaborating on Physical AI — which begins when machines learn from and interact with the real world in real time — within the DVPS project.
DVPS is among the most ambitious projects funded by the European Union in the field of artificial intelligence, backed by an initial investment of €29 million. It brings together 20 organizations from 9 countries to shape the next frontier of AI — one rooted in the interaction of machines with the real world. Building on the success of large language models, DVPS explores the future of AI through multimodal foundation models. Unlike current systems, which learn from representations of the world through text, images, and video, these next-generation models are designed to acquire real-time empirical knowledge via direct interaction with the physical world. By integrating linguistic, visual, and sensor data, they develop a deeper contextual awareness, enhancing human capabilities in situations where trust, precision, and adaptability are critical. The overall initiative is led by Translated, which coordinates the project’s vision and implementation. The team brings together 70 of Europe’s leading AI scientists. The potential applications span several domains, including language, healthcare, and the environment.
Prof. Dr. Folkert W. Asselbergs has joined the DVPS project, representing the University Medical Center Utrecht in the Netherlands and working on healthcare applications in the field of cardiology.

Prof. Dr. Folkert W. Asselbergs
He is a consultant cardiologist, Professor of Precision Medicine at the Institute of Health Informatics, University College London, Chair of the Digital Cardiology and AI Committee of the European Society of Cardiology, Chair of the Amsterdam Heart Center, Chair of Data Infrastructure at the Dutch Cardiovascular Alliance, and associate editor of the European Heart Journal for the digital innovation section. His research program focuses on precision medicine using real-world big data from electronic health records, national registries, and large population-based cohort studies. He has published over 635 papers and is editor of the textbook "Clinical Applications of Artificial Intelligence in Real-World Data."
Your theoretical framework is Translational Data Science, which turns data into actionable knowledge. How did you get into this field, and what clinical needs inspired your action-oriented approach?
It’s important to emphasize that I’m a clinician and during my training, I saw how data was becoming more and more important. Today, I’m a cardiologist and Chair of the Amsterdam Heart Center, Professor of Translational Data Science at UMC Utrecht, Professor of Precision Medicine at University College London, and Chair of the Digital Cardiology and AI Committee of the European Society of Cardiology.
How I got here goes back to my PhD years. At that time, genetics was emerging fast. Genetic and DNA information was becoming the first step in personalized medicine. We saw that patients with certain genetic profiles responded differently to medication or had different risk levels. That was the starting point of my journey, and my first chair was in cardiovascular genetics.
But I soon realized that even people with the same genetic makeup can have very different risks. Genetics interacts with other factors — environmental conditions like climate, lifestyle choices such as smoking, and medication use. This led me to precision medicine.
Then I understood that for all this multi-dimensional data, we need new kinds of analytics, including AI. And still, something was missing: the context of the person. As a doctor, I instinctively read multiple variables when I see a patient walk into my office — but none of that is recorded. Now, AI makes it possible to analyze new types of data: speech, emotions, non-verbal communication, and wearable data on daily activity.
So I moved from a single-modality approach (genetics) to multimodal approaches, and now — thanks to computational power and AI capabilities — towards integrating entirely new kinds of data. We’ve seen an enormous rise in foundation models and methods to analyze this data.
But there’s still a ‘valley of death’ between what’s possible in research and what’s actually implemented in clinical practice. My work in translational data science is about bridging that gap — taking the tools we already have and making sure they are used in daily clinical care, to improve real patients’ lives.
What’s the most significant impact you’ve seen so far in your work?
The biggest one — and I think many would agree — is COVID-19.
At the start, we didn’t really know what the disease was. Early data from China suggested that people with COVID might have an increased risk of cardiovascular disease. So, we acted quickly. Within one week, we launched a study across Europe — and beyond — involving partners in Russia, Iran, the US, and other countries across all continents. The goal: collect data on COVID-19 patients and cardiovascular disease.
People with cardiovascular disease were understandably anxious. They wondered: Do I have a higher risk? And we didn’t know if COVID-19 itself increased the risk of developing heart problems. I even had patients whose partners slept in a different room because they were afraid of infecting each other. Many essentially locked themselves away because we just didn’t know the cardiovascular impact.

In a short time, we gathered structured data from 18 countries and 7,000 hospitalized patients. That really showed me the power of data — but it was still mostly manual, structured information, focused on clinical endpoints like death or hospitalization.
What we want now is richer data: information on quality of life, symptom tracking, and long-term effects. For example, many people with long COVID experience fatigue and other persistent symptoms — but these are hard to capture with our current datasets.
So, COVID-19 taught me two things: the importance of data, and the importance of sharing them. By sharing, we make sure the whole community can benefit from what’s collected.
Over the years, however, your research has increasingly focused on digital twins. From your perspective, how would you define a digital twin?
That’s a difficult question, because everyone has a slightly different definition.
For me, a digital twin is essentially a simulation of yourself. The idea is to collect data about you — wearable data, electronic patient records, other personal information — and match it against large reference datasets to find your “nearest neighbor”: someone most like you. That “twin” can tell you something about your prognosis, diagnosis, and risks.
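To make the idea concrete, here is a minimal sketch of such nearest-neighbor matching in Python; the cohort, the features, and the outcomes are hypothetical placeholders, not a clinical model:

```python
# Illustrative sketch of "nearest neighbor" matching for a digital twin.
# The cohort, features, and outcomes are hypothetical placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Reference cohort: one row per person (age, systolic BP, LDL cholesterol, smoker)
cohort = np.array([
    [54, 142, 3.9, 1],
    [61, 128, 2.8, 0],
    [47, 135, 4.4, 1],
    [58, 150, 3.1, 0],
])
observed_10y_cv_events = np.array([1, 0, 1, 0])  # known outcomes in the cohort

# Standardize so no single feature dominates the distance metric
scaler = StandardScaler().fit(cohort)
index = NearestNeighbors(n_neighbors=2).fit(scaler.transform(cohort))

# A new person's data, e.g. from wearables and health records
person = np.array([[56, 145, 3.7, 1]])
dist, idx = index.kneighbors(scaler.transform(person))

# The matched "twins" hint at prognosis: share of neighbors with an event
print("nearest neighbors:", idx[0])
print("estimated risk:", observed_10y_cv_events[idx[0]].mean())
```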
In my vision, in the future you’ll have an app where you upload your data, and it will give you personalized predictions: your risk of certain cancers, your cardiovascular risk, and so on. But more importantly, digital twins allow you to simulate interventions in silico. You could ask: What if I start exercising? Change my diet? Stop smoking? Lose weight? Start a certain medication or even gene therapy? You can see how your health trajectory might change — and then have an informed discussion with your healthcare provider about the best intervention.
Right now, we often treat patients in a one-size-fits-all way. For example, a younger woman and I might both have hypertension or heart failure, and we’d get the same treatment — even though our risks and responses may be very different. Digital twinning is about personalizing risk models instead of applying the same risk factors to everyone.
DIGITAL TWINS
A digital twin is a live, computational mirror of a real system. The replica is designed to stay synchronized with its physical, real-world counterpart, both to explain the system’s current state and to predict what could happen next on the basis of a customized model. A digital twin ingests streams of information from diverse sources, such as sensors, logs, and human input. After cleaning and aligning that information, it fuses it with models ranging from physics-based simulators to machine learning algorithms. Through this process, the twin maintains an up-to-date picture of the state of the real-world system and the uncertainties involved. It can then run “what-if” and counterfactual scenarios to help optimize the design, operations, maintenance, risk, and resilience of the real-world system. Although only as good as the data that feeds it, a digital twin turns raw data into foresight and control.
Digital twins are now widely used across industries to design faster and operate with fewer surprises, in fields such as aerospace, automotive, manufacturing, processing, energy, infrastructure, logistics, telecom, life sciences, and healthcare. In life sciences and healthcare, applications include virtual patients for exploring targets and drug interactions in clinical R&D; “soft sensors” for biological variables that are hard to measure in real time; and patient-specific physiological twins used to plan implants, estimate risk, and personalize therapy. These applications virtualize costly and risky steps, personalize care, and make medical operations more resilient, while demanding rigorous validation against real medical outcomes to earn trust.
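As a rough illustration of that ingest, fuse, and simulate loop, here is a deliberately simplified sketch in Python; the state variables, the update rule, and the effect sizes are invented for illustration and not clinically validated:

```python
# Deliberately simplified digital-twin loop: ingest data, update state,
# run a "what-if" scenario. All variables and effect sizes are invented.
from dataclasses import dataclass, field

@dataclass
class HeartTwin:
    systolic_bp: float = 140.0
    smoker: bool = True
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Fuse a new sensor/EHR reading into the twin's state."""
        self.history.append(reading)
        # naive smoothing of blood pressure stands in for real model fusion
        self.systolic_bp = 0.8 * self.systolic_bp + 0.2 * reading["systolic_bp"]

    def risk(self) -> float:
        """Toy risk score standing in for a validated clinical model."""
        r = 0.01 * max(self.systolic_bp - 120, 0)
        return min(r + (0.10 if self.smoker else 0.0), 1.0)

    def what_if(self, quit_smoking: bool = False, bp_drop: float = 0.0) -> float:
        """Counterfactual: simulate an intervention without mutating the twin."""
        sim = HeartTwin(self.systolic_bp - bp_drop, self.smoker and not quit_smoking)
        return sim.risk()

twin = HeartTwin()
twin.ingest({"systolic_bp": 150})
print("current risk:", round(twin.risk(), 3))
print("if I quit smoking and lower BP:", round(twin.what_if(True, 15), 3))
```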
And it’s not just about patients — it’s about people. The real goal is to assess lifetime risk and act early. If someone already has the disease, it’s too late for prevention. A digital twin can show not just your risk, but also how you can change it, and what your future might look like if you act.
Of course, to make this work, you need a chatbot or avatar people can trust — a kind of virtual coach or doctor. This coach must have enough information to be accurate, and avoid both false negatives and especially false positives. False positives create unnecessary anxiety and drive up healthcare costs. Doctors already tailor how they explain risk depending on the patient. If you have chest pain but your risk is minimal, I’ll say so. If I have chest pain and I’m older, maybe the risk is higher and I’ll be advised to get an ECG. The recommendation needs to be customized.
That’s why I’m excited about projects like DVPS and working with Translated. In medicine today, “chest pain” is treated as flat text in AI models — but context matters. There’s an emotional context, cultural differences, age differences — all of which change the meaning and weight of those same words. I want systems that recognize those nuances, because that’s when you can truly personalize care.
And throughout all this, trustworthiness is key. People need to believe in their digital twin — and in the advice it gives.
How are these digital twins built? And how do you translate different kinds of inputs into a single, accurate and understandable model?
They’re built from both material and immaterial parameters.
Material parameters are things you can measure objectively — your DNA, blood pressure, lab values, medication use. These are structured, quantitative data points that can be standardized across languages and healthcare systems.
Immaterial parameters are more about your lifestyle, cultural background, and socioeconomic context. These are harder to quantify, but they’re just as important for understanding health outcomes. They’re like totally different languages.
Right now, most digital twins — including the ones we’re developing — are still built mainly on structured data: lab values, ECG results, blood pressure, smoking status, weight, DNA. Those are relatively easy to standardize.
What’s much harder to standardize is free text — symptoms, for example. This is where DVPS comes in. At the moment, we simply don’t have enough of this kind of data. We don’t record patient conversations in the clinic. That means we can’t yet build models that fully understand the nuance of symptoms described in natural language.
DVPS can change that by capturing and processing this unstructured, symptom-level information — giving us a richer, more complete foundation for digital twins.
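To make the contrast concrete, here is an illustrative sketch; every field name and value is made up, but it shows why structured quantities translate across systems while free text does not:

```python
# A structured, language-independent slice of a digital twin's input.
# Field names, units, and values are illustrative only.
structured_record = {
    "age_years": 56,
    "systolic_bp_mmHg": 145,
    "ldl_cholesterol_mmol_per_L": 3.7,
    "smoker": True,
    "medications": ["metoprolol", "atorvastatin"],
}

# By contrast, symptoms arrive as free text, whose meaning shifts with
# language, culture, and context, and is far harder to standardize:
symptom_note = "Een drukkend gevoel op de borst bij inspanning"  # Dutch free text

print(structured_record["systolic_bp_mmHg"])  # same meaning in any language
print(symptom_note)                           # needs context-aware models
```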
How can digital twins personalize medical guidance, foster trust, and bridge the gap between doctors and patients?
Starting with the last point — I believe this technology can democratize not just data, but knowledge.
Right now, when you come to me as a patient, there’s a gap. I’ve studied medicine for 16 years, and you trust me because of that expertise. Is that trust always justified? I hope so — but it’s not guaranteed.
With a digital twin, if you know your risks and treatment options, we can connect your health data directly to the relevant professional clinical guidelines and medical literature. That means you can access the same information I have in my head. For example, if your cholesterol is high, your digital twin could consult the guidelines using a large language model and say: “Your value is above the recommended threshold, and the guideline suggests this treatment.” You can then go to your doctor with that information. The doctor might say, “No, in your case it’s different,” or they might agree. But the dynamic changes — you’re now part of an informed, equal conversation.
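A minimal sketch of how such a guideline check might be wired up, assuming a placeholder threshold and placeholder wording rather than actual clinical guidance:

```python
# Sketch: connect a personal value to a guideline recommendation.
# Threshold and wording are placeholders, not real clinical guidance.
GUIDELINE_SNIPPETS = {
    "ldl_cholesterol": {
        "threshold_mmol_per_L": 3.0,  # hypothetical cut-off
        "advice": "The guideline suggests discussing statin therapy.",
    },
}

def twin_explains(measure: str, value: float) -> str:
    rule = GUIDELINE_SNIPPETS[measure]
    if value <= rule["threshold_mmol_per_L"]:
        return f"Your {measure} of {value} is within the recommended range."
    # In a real system, a large language model would rephrase the guideline
    # for the patient's language, culture, and educational background.
    return (f"Your {measure} of {value} is above the recommended threshold "
            f"of {rule['threshold_mmol_per_L']}. {rule['advice']}")

print(twin_explains("ldl_cholesterol", 3.7))
```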

In my vision, the model has to adapt this information to the patient’s educational level, cultural background, and language. This makes the information more relatable and builds trust. If you can recognize yourself — or see a trusted figure — in that avatar, you’re more likely to engage with it. We could even create avatars that look and sound like your own GP. In small tests we ran in London, patients felt comfortable talking to avatars and trusted the answers they got.
And once you recognize yourself, a new personal vocabulary for health emerges. That can boost awareness, confidence, and — we hope — adherence to treatment. If people truly understand what’s happening in their bodies, they’re more likely to follow through.
But this could also transform the structure of healthcare itself. Today, it’s doctor-centered and hospital-based. With this technology, it could become people-centered and app-based. Patients could arrive with their own data, choose their physicians, and stay in control. Right now, I hold your MRIs, your echoes, your test results — so in reality, you’re dependent on me. In the future, you could hold that data yourself.
That shift will push healthcare into the big tech domain. And the role of the doctor will have to change — because if we don’t adapt, parts of our work could be replaced.
In technology, we often hear about “universal design.” Do you believe there could be a kind of “universal language” for these digital twins — or should we embrace linguistic and cultural diversity when building AI models for healthcare?
What you often see now is that people train a model in English, and then try to transfer it to other languages. But I believe we should focus on multilingualism — because every culture is different. Yes, we can learn from each other, and there should be a foundational model that works across languages, a kind of universal core. But it still needs to be fine-tuned on local data.
There are important local differences: in Japan, some villages have unusually high numbers of people reaching 100; in Italy, the blue zones show similar longevity; in Peru, altitude is a major health factor. These variations matter.
If we don’t fine-tune models for local languages and contexts, we risk bias and exclusion. Health is also perceived differently: an Italian’s sense of well-being may not match that of a Dutch person. Northern and Southern Europe aren’t the same, right? Humanity isn’t universal. Of course, we share some universal values — family, peace — but there are also many differences. And those differences are worth embracing, because they’re what make us human.
Speaking of which, in medicine language is supposed to be neutral — but what does neutrality actually mean, especially when models are often trained on data that reflect cultural biases? If the AI begins to “speak” directly to patients, who decides how it speaks? Who controls the message — and who is ultimately responsible for what is said?
We have to be aware that the data we use is already biased — because most of it comes from the Western world. That’s why we need to enrich it with data from other countries. In another Horizon project on AI for heart failure, for example, we included Tanzania and Peru to make the dataset more balanced. That balance is our responsibility. And if the data isn’t balanced, we must be transparent about it.

I think we should have something like an AI passport — a clear disclaimer that explains the limitations of the dataset: where the data came from, what populations it represents, and what’s missing. If a model was trained only on Icelandic data, we have to make sure people know that, and that they understand the characteristics of that population. There’s a clear societal responsibility here.
But the harder part is liability — especially when AI starts working independently or even autonomously. If AI is replacing some human decisions, where should the human be in the loop? At the start, or at the end? If it’s at the start, we’re not really replacing work, just adding more steps. But the goal is to increase access to care and equity — not replace doctors entirely, but free up capacity so everyone can have access to healthcare.
I think we’ll need an AI agent that continuously audits the system. If something irregular comes up, it escalates to a human. And it has to be explainable. Patients should be able to see why the AI came to a certain conclusion — the reasoning, the triggers. That way, if they have doubts, they can go to a doctor with that transcript and ask, “Do you agree?”
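As a sketch of what such a continuously auditing agent could look like (the checks, the threshold, and the escalation rule below are invented placeholders):

```python
# Sketch of an auditing agent that reviews AI outputs and escalates
# irregular cases to a human, with a traceable explanation. The checks
# and thresholds are invented placeholders.
def audit(decision: dict) -> dict:
    reasons = []
    if decision["confidence"] < 0.7:          # placeholder threshold
        reasons.append("low model confidence")
    if decision["advice_conflicts_with_guideline"]:
        reasons.append("advice deviates from guideline")
    return {
        "escalate_to_human": bool(reasons),
        "reasoning": reasons or ["all checks passed"],
        "transcript": decision,               # patient can show this to a doctor
    }

result = audit({
    "advice": "increase statin dose",
    "confidence": 0.62,
    "advice_conflicts_with_guideline": False,
})
print(result["escalate_to_human"], result["reasoning"])
```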
So, for me, transparency and explainability are key. That’s why we also need an ethical framework — not just in Europe but globally — that defines what we mean by trustworthiness in AI, and how we make sure we can rely on it.
If you had to explain in one sentence — to someone completely outside your field — what makes your work impactful, what would you say?
I would say that it promotes the democratization of knowledge by giving people trustworthy, personalized insights about their own health, so they can understand their prognosis, explore treatment options, and make informed decisions based on information tailored to who they are.