Translated's Research Center

How AI Is Changing the Way We Face Climate Disasters

As part of the DVPS project, which investigates the future of AI through multimodal foundation models, we spoke with the VITO team about how AI is transforming disaster response and helping manage uncertainty in the era of climate crises.


Research

This interview is part of a broader editorial project by Imminent, featuring conversations with expert professionals collaborating on Physical AI — which begins when machines learn and interact with the real world in real time — within the DVPS project.

DVPS is among the most ambitious projects funded by the European Union in the field of artificial intelligence, backed by an initial investment of €29 million. It brings together 20 organizations from 9 countries to shape the next frontier of AI — one rooted in the interaction of machines with the real world. Building on the success of large language models, DVPS explores the future of AI through multimodal foundation models. Unlike current systems, which learn from representations of the world through text, images, and video, these next-generation models are designed to acquire real-time empirical knowledge via direct interaction with the physical world. By integrating linguistic, visual, and sensor data, they develop a deeper contextual awareness, enhancing human capabilities in situations where trust, precision, and adaptability are critical. The overall initiative is led by Translated, which coordinates the project’s vision and implementation. The team brings together 70 of Europe’s leading AI scientists. The potential applications span several domains, including language, healthcare, and the environment.

Vlaamse Instelling voor Technologisch Onderzoek NV (VITO), an independent research organization founded in Belgium, has joined the DVPS project. It is a leading European center focused on a regenerative economy, healthy living environments, and resilient ecosystems.

Dr. Tanja Van Achteren

Team Lead for Applied Geo-AI & Embedded Solutions at VITO

Dr. Tanja Van Achteren is Team Lead for Applied Geo-AI & Embedded Solutions at VITO. With a Ph.D. in Electrotechnical Engineering from KU Leuven, she leads research at the crossroads of AI, remote sensing, and embedded imaging systems. Her team develops multimodal foundation models and self-supervised vision transformers for Earth observation, enabling smarter monitoring of land use, infrastructure, and the environment. She drives innovation in trustworthy, efficient geo-AI — translating cutting-edge research into operational impact for resilience, security, and sustainability.

Dr. Lisa Landuyt

Remote Sensing Scientist at VITO

Dr. Lisa Landuyt is a Remote Sensing Scientist at VITO, specializing in machine and deep learning for policy support, water management in particular. She holds a master’s in Bioscience Engineering and a PhD from Ghent University on flood mapping using SAR satellite imagery. Her research focuses on water as well as nature monitoring — from flood delineation and change detection to AI-driven tools that turn satellite imagery into actionable insights for environmental policy and resilience planning.

Andreas Luyts

AI Scientist for Remote Sensing at VITO

Andreas Luyts is an AI Scientist for Remote Sensing at VITO. With master’s degrees in Theoretical Physics and Space Studies from KU Leuven, he focuses on developing foundation models and advanced AI methods for Earth observation. His work spans multimodal representation learning, satellite data compression, and automated land cover mapping. Before joining VITO, he contributed to research on computer vision for geospatial applications at ESA’s Φ-lab.

At VITO, our team works at the intersection of technology, industry, and end users. We focus on connecting advanced tools with real-world applications, ensuring that the solutions we develop are not only innovative but also practical and impactful for the people and communities who need them.

Our team began exploring AI around 2015, when the potential of deep learning and related technologies started gaining momentum. Since then, the team has grown to nearly a dozen experts, bringing together a multidisciplinary mix of skills. This approach allows us to turn data into actionable insights for the people who rely on it.

This capability is crucial when it comes to crises. We know that natural hazards are increasing and will continue to increase, in both frequency and intensity. But what we have consistently observed is that during disasters there is often a critical gap: people on the ground lack the information they need to respond effectively.

In this sense, providing people with the right information is crucial. While we cannot prevent these events from occurring, we can transform how we respond to them. That’s what drives our work: minimizing the impact on people, economies, and nature by ensuring that timely, accurate information reaches those who need it most. By combining multidisciplinary expertise, close collaboration with end users, and advanced AI tools, the VITO team aims to make disaster responses faster, more informed, and ultimately more effective—helping the world thrive even amid a growing climate crisis.

AI offers incredible potential, but it also brings some serious challenges. One key issue is transferability and generalizability. You can train an excellent model on data from a specific region in Belgium—but what happens if a disaster, say a flood, hits a completely different area? Will the model still perform well? Will it provide useful insights? That’s hard to predict in advance.

Then there’s the interpretability problem—the classic “black box” dilemma with AI. You might get accurate answers, but you don’t always know how the system arrived at them. In crisis management, that lack of transparency can be critical. If you don’t understand why a model produces certain results, it’s hard to know whether you can truly trust the information it gives you.

This is where multimodality can make a real difference. During a flood, for example, every piece of data matters. Every snapshot of the affected area helps build a clearer picture—where is the flooding most severe? Are buildings damaged? What areas are still accessible?

Most current remote sensing research still relies on a single sensor—one satellite, one aerial image, one drone capture. But in real-world emergencies, you don’t want to depend on just one data source. There’s a wealth of other information available—news articles, social media updates, textual and vector data—and by integrating all these different sources through multimodal models, we can start making sense of the situation immediately. That’s a major step forward.

Right now, emergency responders still carry much of the burden. They’re the ones who need to gather all these puzzle pieces, analyze them, and draw conclusions. The hope is that multimodal AI can help lighten that load—speeding up their work and supporting faster, more informed decisions. 

Still, many challenges remain. For example, in disaster management, information rarely arrives all at once. While drone imagery might come in quickly, satellite imagery often takes longer to acquire, downlink, and pre-process. Social media posts often appear first, but can be unreliable. The question is: how do we handle this asynchronous flow of data—all these sources arriving at different times—while trying to build a foundation model capable of integrating and interpreting them all effectively?
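
To make that asynchrony concrete, here is a minimal sketch of how incoming observations might be pooled as they arrive (the source names, timings, reliability weights, and numbers are illustrative assumptions, not a description of our operational pipeline): each observation is time-stamped on ingestion, and the situation picture is rebuilt at query time from whatever recent evidence is available, weighted by how much each source is trusted.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    source: str               # e.g. "social_media", "drone", "satellite" (illustrative)
    timestamp: float          # minutes since the event started
    flooded_fraction: float   # estimated share of the area under water, 0..1
    reliability: float        # subjective trust in this source, 0..1

@dataclass
class SituationPicture:
    observations: List[Observation] = field(default_factory=list)

    def ingest(self, obs: Observation) -> None:
        """Sources arrive asynchronously; simply record each observation as it comes in."""
        self.observations.append(obs)

    def estimate(self, now: float, max_age: float = 180.0) -> float:
        """Reliability-weighted estimate using only observations that are still recent enough."""
        recent = [o for o in self.observations if now - o.timestamp <= max_age]
        if not recent:
            raise ValueError("no recent observations available")
        total_weight = sum(o.reliability for o in recent)
        return sum(o.reliability * o.flooded_fraction for o in recent) / total_weight

picture = SituationPicture()
picture.ingest(Observation("social_media", timestamp=5, flooded_fraction=0.60, reliability=0.3))
picture.ingest(Observation("drone", timestamp=40, flooded_fraction=0.45, reliability=0.8))
picture.ingest(Observation("satellite", timestamp=120, flooded_fraction=0.50, reliability=0.9))
print(f"Estimated flooded fraction: {picture.estimate(now=130):.2f}")
```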

In Belgium, for instance, existing tools like Waterinfo.be and Paragon already play an important role in crisis management. Waterinfo.be focuses on water management, enabling coordination among water managers and providing real-time water levels and flood forecasts to help anticipate critical situations. Paragon, on the other hand, connects various stakeholders—from politicians to first responders—through chat and mapping interfaces. However, both remain limited in scope: they don’t fully integrate new, dynamic information such as remote sensing updates or allow for easy interaction with complex data.

Looking ahead, disaster management tools should be far more interactive and adaptive. Users should be able to query the system directly and receive clear, concise summaries, while the tools themselves must integrate diverse data streams—social media, news, satellite imagery, drone data—and tailor outputs to different stakeholders. 

For example, take firefighters: they don’t have time to stop and look at a map on their phone. They need real-time audio guidance in their headset, describing their surroundings and directing them toward people who need help. Meanwhile, a politician might require short, media-ready updates. Each role demands information in a different form, and the tools of the future must adapt accordingly. By combining these data sources with language foundation models, we aim to automatically tailor information to each user’s context, making the system more accessible and reducing the need for constant expert interpretation.
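
As a simple illustration of that role-dependent tailoring, the sketch below builds a role-specific instruction around the same incident facts before handing them to whatever language foundation model is used; the roles, incident fields, and wording are hypothetical and only meant to show the idea.

```python
# Illustrative sketch of role-aware prompt construction; the incident fields and
# role descriptions are assumptions, not an actual DVPS interface.

INCIDENT = {
    "location": "valley east of the town centre",
    "flood_extent_km2": 3.2,
    "blocked_roads": ["N10 between km 4 and 7"],
    "people_reported_trapped": 2,
}

ROLE_STYLES = {
    # Firefighters need terse spoken directions; politicians need a media-ready summary.
    "firefighter": (
        "Answer in short spoken sentences suitable for audio playback in a headset. "
        "Lead with where people need help and which access routes are blocked."
    ),
    "politician": (
        "Answer with a two-sentence, media-ready summary of the situation and the "
        "response under way. Avoid operational jargon."
    ),
}

def build_prompt(role: str, incident: dict) -> str:
    """Combine the shared incident facts with a role-specific output instruction."""
    facts = "\n".join(f"- {key}: {value}" for key, value in incident.items())
    return f"Current incident data:\n{facts}\n\nInstruction: {ROLE_STYLES[role]}"

print(build_prompt("firefighter", INCIDENT))
```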

However, since information is democratized and not always mediated by experts, managing liability is essential. For now, from what we observe, AI’s trustworthiness is still lower than that of humans. It has drawbacks and limits, with uncertainties in both the input data and the models, so there should always be a human in the loop.

In disaster management there are different types of decisions to make, but decisions that impact human lives should not be made by impersonal algorithms. In this sense, human oversight remains non-negotiable.

AI supports human decision-makers by extracting and organizing information from diverse technical sources, efficiently guiding stakeholders in the field. It can also manage sensitive data, such as drone imagery over private areas, by ensuring that only the information that can be shared safely reaches specific stakeholders—a concept we’ve also explored in other DVPS domains, like federated learning in cardiology. Ultimately, the most effective system combines human expertise with AI assistance: AI efficiently gathers, processes, and summarizes information, while humans interpret, validate, and make critical decisions. This partnership enables faster, more informed disaster responses while respecting ethical, legal, and practical constraints.

Quantifying uncertainty is a fundamental part of our work, because if people don’t trust the data or the predictions, they won’t rely on them to make effective decisions. Trust depends on how clearly we communicate the limits of our models. 

Uncertainty exists at multiple levels—in the data, in the models, and in how results are interpreted. At the data level, for example, a social media report is far less reliable than a direct drone observation, while a satellite image can be extremely valuable but sometimes difficult to interpret. 

This is where multimodality again becomes a key advantage. Having access to multiple types of data allows us to cross-check information and verify its consistency. A social media post, for instance, can be validated using satellite or aerial observations. Each source has its own uncertainty, but by combining them—and managing them probabilistically—we can build a more trustworthy picture of what’s happening on the ground.
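
To show what “managing them probabilistically” can mean in the simplest case, here is a sketch of inverse-variance weighting for a single quantity (the water-level values and per-source standard deviations are made up): each source contributes in proportion to how certain it is, and the fused estimate comes with its own, smaller uncertainty.

```python
import math

# Hypothetical independent estimates of the same water level (metres),
# each with its own standard deviation; all numbers are illustrative.
estimates = [
    ("social_media", 1.8, 0.5),   # (source, value, standard deviation)
    ("drone",        1.5, 0.2),
    ("satellite",    1.6, 0.3),
]

# Inverse-variance weighting: the more certain a source, the more it counts.
weights = [1.0 / sigma ** 2 for _, _, sigma in estimates]
fused_value = sum(w * value for w, (_, value, _) in zip(weights, estimates)) / sum(weights)
fused_sigma = math.sqrt(1.0 / sum(weights))

print(f"Fused water level: {fused_value:.2f} m +/- {fused_sigma:.2f} m")
```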

On the modeling side, one way to handle uncertainty is through hybrid systems that integrate physical parameters within AI models. Physical models are easier to understand because each parameter represents a measurable aspect of reality, and that makes it possible to quantify how uncertainty in those parameters affects the outputs. AI models, on the other hand, often contain parameters that are not directly interpretable, which makes uncertainty quantification much harder.

To bridge this gap, we work with physics-informed machine learning models, which incorporate aspects of physical behavior into the system. This helps us better understand where uncertainties in predictions come from. Uncertainty can also be analyzed at the component level: different encoders or sub-models contribute their own margins of error, which can then be aggregated into a global uncertainty estimate.
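
For readers curious what “incorporating aspects of physical behavior” can look like in practice, here is a minimal, self-contained sketch of a physics-informed loss (the storage equation, the relative weighting of the two terms, and every number are illustrative assumptions): a data-fit term is combined with a penalty on how strongly the predicted water levels violate a simple water-balance constraint.

```python
import torch

# Toy physics-informed loss: a model predicts water level h(t) at hourly steps, and we
# penalize violations of a simple storage equation dh/dt ~ (inflow - outflow) / area.
# All quantities below are made-up illustrations, not calibrated values.

area = 1.0e5                                          # storage area in m^2 (assumed)
dt = 3600.0                                           # time step in seconds
inflow = torch.tensor([20.0, 25.0, 30.0, 28.0])       # m^3/s (assumed)
outflow = torch.tensor([18.0, 20.0, 22.0, 24.0])      # m^3/s (assumed)
observed_h = torch.tensor([1.00, 1.07, 1.25, 1.43])   # observed levels in m (assumed)

predicted_h = torch.tensor([1.00, 1.10, 1.22, 1.40], requires_grad=True)

def physics_informed_loss(h: torch.Tensor) -> torch.Tensor:
    # Data term: fit the sparse, noisy observations.
    data_loss = torch.mean((h - observed_h) ** 2)
    # Physics term: finite-difference dh/dt should match the water balance.
    dh_dt = (h[1:] - h[:-1]) / dt
    balance = (inflow[:-1] - outflow[:-1]) / area
    physics_loss = torch.mean((dh_dt - balance) ** 2)
    return data_loss + 10.0 * physics_loss            # the weighting is a design choice

loss = physics_informed_loss(predicted_h)
loss.backward()                                       # gradients would drive a real training loop
print(f"Combined loss: {loss.item():.6f}")
```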

Estimating uncertainty isn’t enough; it must be communicated clearly. A simple accuracy number—say, “70%”—means little in the field. It’s more useful to translate it into practical terms: “These are the high-risk areas; this neighborhood is likely safe,” or when the model is uncertain: “We’re not sure about this part; it’s worth checking directly.”
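
As a small illustration of that translation step, the sketch below maps per-area flood probabilities onto field-oriented messages; the thresholds and the example areas are arbitrary assumptions, chosen only to show the shape of the output.

```python
# Illustrative mapping from model probabilities to practical guidance;
# thresholds and example values are assumptions, not validated settings.

def describe(area: str, flood_probability: float) -> str:
    if flood_probability >= 0.8:
        return f"{area}: high risk - prioritise evacuation and access checks."
    if flood_probability <= 0.2:
        return f"{area}: likely safe."
    return f"{area}: uncertain - worth checking directly before relying on this."

for area, probability in [("Riverside district", 0.92), ("Station quarter", 0.55), ("Upper town", 0.08)]:
    print(describe(area, probability))
```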

Clear communication helps responders set realistic priorities and builds trust. This not only applies to emergency services but also to the wider public: if something is highly uncertain, it must be stated. The goal isn’t to eliminate uncertainty, but to manage it so decisions remain informed, responsible, and grounded in reality.

In this sense, AI is not a truth-sayer. It can extract and organize information, quantify uncertainties, and support human decision-making—but humans remain central, responsible for interpreting and validating the results.

Climate change is forcing communities to face more frequent and intense disasters, putting pressure on cultural practices and social structures. How people respond varies widely depending on culture—how they trust official warnings or rely on community networks—and local knowledge and social cohesion play a crucial role. From a technical standpoint—and let’s be clear, we’re tech people, not social scientists—we don’t just optimize for accuracy. That’s only a quantitative metric. Equally important is making information accessible and understandable across different languages, contexts, and cultures.

In this sense, multilingualism can certainly help scale our work. We don’t even have to go outside of Europe to face different cultures: within Belgium alone, we have three official languages. And with each language comes not only a way of speaking but also distinct forms of politeness, intonation, and expression.

In disaster management, specifically, this becomes even more complex because each community also has its own jargon. So it’s not just a matter of translating words—it’s really about reaching the right audience in the right way. For now, we often use English, but we could go much further if we were able to address people in their mother tongue, or even in dialects that are not widely spoken or digitally available. That’s definitely where multilingual models can make a difference, helping us reach people who might otherwise miss crucial information.

A small but telling example involves an untranslatable concept. In French, you can say “crue,” which means the rising of the water within the river, and “inondation,” which refers to the actual flooding—when the river overflows its natural bounds. But in Dutch or English, there isn’t really a term for that “rising water phase.” When we first interacted with French-speaking stakeholders, we found it quite confusing. We even had to ask a French-speaking scientist to explain it to us because Google Translate didn’t capture the nuance. These subtle linguistic differences can really complicate communication and collaboration.

Having multilingual and transferable models could really help prevent such confusion—ensuring that clear, consistent messages reach everyone, in their own language and within their own understanding of what needs to be done.

That said, it’s important to note that this hasn’t been our main research focus so far. Multilingualism and cross-cultural communication are angles we haven’t explored before. That’s why we’re excited that Translated is coordinating the DVPS project. The question for us is: how can we integrate multilingual capabilities and non-verbal communication insights into disaster management tools? It’s new territory, and it’s fascinating.

Through a multidisciplinary approach and close collaboration with stakeholders, we operate at the intersection of innovation and reality, fully engaging with the people who have needs and will use what we create. We don’t work just for society, but with society—and even as we dream big, every step we take is grounded, realistic, and aimed at meaningful impact.