Translated's Research Center

Conference Corner: EurIPS 2025

Research

In our Conference Corner, readers will find Imminent’s take on the most important conferences in language research. Each edition highlights the most interesting talks, notable papers, and emerging trends presented at these events. Whether you’re exploring advances in linguistics, NLP, or broader language sciences, our curated summaries provide a clear and engaging snapshot of the ideas and innovations shaping the field.

Why EurIPS?

EurIPS 2025, the first European satellite event of the flagship NeurIPS conference, held in Copenhagen, offered a unique strategic advantage. By concentrating around 2,000 members of the European AI talent pool and featuring a selective program (38 orals, 241 posters), it provided high-density networking and access to scientific advances in AI within an EU context.

Emerging Trends

At EurIPS 2025, Sustainable AI was a central theme, focusing on efficiency, adaptability, and environmental responsibility. Emtiyaz Khan’s talk, “Adaptive Bayesian Intelligence and the Road to Sustainable AI”, introduced continual learning methods that reduce the need to retrain models from scratch, while Sepp Hochreiter’s “Sustainable, Low-Energy, and Fast AI Made in Europe” presented xLSTM, an efficient alternative to Transformers that lowers energy use, speeds up inference, and handles longer contexts for practical, low-energy AI deployment.

Highlights from Talks and Papers

One highlight was The Art of (Artificial) Reasoning by Yejin Choi, who argues that generative AI can be democratized beyond current scaling laws, which imply that extreme scaling of resources is the only path forward, by innovating with unconventional data, algorithms, and collaboration. In support of her position she presented ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models, demonstrating that careful RL can push the reasoning capabilities of small models. On the data side she presented Prismatic Synthesis: Gradient-based Data Diversification Boosts Generalization in LLM Reasoning, showing that performance can be scaled by scaling synthetic data generation, and as an example of extreme collaboration she presented OpenThoughts: Data Recipes for Reasoning Models, a dataset created by coordinating many universities, companies, and startups.

Some interesting talks and posters focused on techniques directly applicable to language processing:

Inference-Time Hyper-Scaling with KV Cache Compression – boosts reasoning accuracy by compressing the Transformer’s KV cache to allow more token generation within the same memory footprint. The proposed method, Dynamic Memory Sparsification (DMS), sparsifies KV caches, achieving up to 8x compression by teaching pre-trained models which tokens can be scheduled for future eviction.
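The general idea of cache sparsification can be sketched as follows. This is a toy illustration, not the actual DMS method (which learns eviction decisions during training); the function name and the use of precomputed importance scores are assumptions made for the example.

```python
import numpy as np

def sparsify_kv_cache(keys, values, scores, compression_ratio=8):
    """Toy KV-cache sparsification: keep only the most important
    cached entries, evicting the rest.

    keys, values: (seq_len, d) arrays of cached projections
    scores: per-token importance (e.g., accumulated attention mass)
    """
    seq_len = keys.shape[0]
    keep = max(1, seq_len // compression_ratio)
    # Retain the top-`keep` tokens by importance, preserving order;
    # the remaining tokens are evicted from the cache.
    idx = np.sort(np.argsort(scores)[-keep:])
    return keys[idx], values[idx]

# Usage: a 64-entry cache compressed 8x down to 8 entries,
# freeing memory budget for generating more tokens.
keys = np.random.randn(64, 16)
values = np.random.randn(64, 16)
scores = np.random.rand(64)
k, v = sparsify_kv_cache(keys, values, scores)
assert k.shape == (8, 16) and v.shape == (8, 16)
```

The freed memory is what allows "hyper-scaling" at inference time: more tokens can be generated within the same footprint.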

Beyond Oracle: Verifier-Supervision for Instruction Hierarchy in Reasoning and Instruction-Tuned LLMs (paper) – introduces a unified framework that improves instruction hierarchy in LLMs (e.g., system vs. user instructions) by using programmatically verifiable signals instead of costly oracle labels. The method includes a synthesis pipeline that creates conflicting instruction pairs and executable verifiers (i.e., Python functions), ensuring dataset quality through automated unit testing and repair. Applying RL with verifiable rewards on this dataset significantly enhances adherence to complex directives and safety robustness.
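An executable verifier of this kind can be very simple. The sketch below is illustrative only: the specific constraint (a system-imposed word limit conflicting with a user request for a longer answer) and the function names are hypothetical, not taken from the paper.

```python
def verifier_max_words(output: str, limit: int = 20) -> bool:
    """Executable verifier: does the response respect the system-level
    word limit, even if the user instruction asked for more?"""
    return len(output.split()) <= limit

def reward(output: str) -> float:
    """Verifiable reward for RL: 1.0 if the higher-priority (system)
    instruction wins the conflict, 0.0 otherwise."""
    return 1.0 if verifier_max_words(output) else 0.0

# Unit tests validating the verifier itself, in the spirit of the
# paper's automated test-and-repair step.
assert reward("Short compliant answer.") == 1.0
assert reward("word " * 30) == 0.0
```

Because the signal is a deterministic program rather than a human or LLM judge, it can be run at scale and itself be unit-tested for correctness.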

Hogwild! Inference: Parallel LLM Generation via Concurrent Attention (paper) – introduces a parallel generation method that allows multiple LLM instances to collaborate via a shared, concurrently updated KV cache. The system enables workers to agree on a shared strategy and synchronize by seeing each other’s memory (KV entries), without requiring additional fine-tuning or recomputation. This promising new approach enables effective and efficient collaboration between multiple LLMs, boosting accuracy on reasoning tasks.
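The collaboration mechanism can be pictured with a toy schematic. This is not the paper's implementation: string tokens stand in for real KV tensors, generation is interleaved round-robin rather than truly concurrent, and the names SharedCache and worker_step are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SharedCache:
    """Stand-in for a concurrently updated KV cache: every worker
    appends its own entries and can attend over everyone's."""
    entries: list = field(default_factory=list)

    def append(self, worker_id, token):
        self.entries.append((worker_id, token))

    def visible_context(self):
        # Each worker sees the full shared memory, not just its own.
        return list(self.entries)

def worker_step(worker_id, cache, plan):
    """One generation step: read the whole shared context (own tokens
    plus the other workers'), then contribute the next token."""
    context = cache.visible_context()
    already_emitted = len([e for e in context if e[0] == worker_id])
    cache.append(worker_id, plan[already_emitted])

# Two workers split a reasoning task and see each other's progress.
cache = SharedCache()
plans = {0: ["outline", "step-A"], 1: ["verify", "step-B"]}
for _ in range(2):              # interleaved generation rounds
    for wid in (0, 1):
        worker_step(wid, cache, plans[wid])

assert len(cache.entries) == 4
assert cache.entries[0] == (0, "outline")
```

The key property mirrored here is that coordination happens purely through shared memory, which is why the real method needs no fine-tuning or recomputation.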

Conference Main Themes

Beyond the central theme of Sustainable AI, the conference maintained a strong balance across a broad range of topics. The program featured diverse research areas including Model Optimization and Representation Learning, as well as specialized fields such as Computer Vision, Diffusion Models, Graph Neural Networks, Causal Inference, and Reinforcement Learning.

New Resources

EurIPS 2025 also highlighted key open-source resources, including TabArena – a living benchmark for machine learning on tabular data that advocates for evolving evaluation benchmarks over static ones. Although applied to tabular data, the challenges the paper addresses are likely relevant to other domains as well.