Translated's Research Center

Imminent Science Spotlight

Curated monthly by our team of forward-thinking researchers, this spotlight presents the latest academic insights on language and technology: a deep dive into ideas on the brink of change, covering large language models (LLMs), machine translation (MT), text-to-speech, and more.

September 2025

July 2025

June 2025

REL-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance

As LLMs become integral to human-AI interactions, traditional evaluations of model uncertainty—focused on verbal or numerical calibration—fail to capture a critical aspect: how humans respond to model outputs. In REL-A.I. (pronounced “rely”), the authors propose an interaction-centered framework that directly measures human reliance on LLMs. Through controlled studies, they show that contextual factors—such as the domain of the question, the model’s typical tone of confidence, and even polite greetings like “I’m happy to help!”—can significantly alter user behavior, increasing reliance by up to 30%. Their findings thus reveal that seemingly well-calibrated models can still induce risky human behavior, underscoring the need to evaluate LLMs not just by what they say, but by how their outputs shape user decisions.
Read the full paper here

May 2025

Symbolic Regression (SR) is the task of discovering the underlying mathematical equations that govern a data distribution. Classical techniques search a vast combinatorial space through many iterations, with little scientific guidance. Leveraging the scientific knowledge and code-generation abilities of general-purpose LLMs, the authors propose a more efficient method for equation discovery. Representing equation structures as Python programs, they prompt the LLM to produce equation proposals, whose parameters are then mathematically optimized; the best proposals are fed back to the LLM in an iterative loop. They demonstrate much faster and better convergence with this LLM-guided strategy across three data domains.
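The propose-optimize-feedback loop can be sketched in a few lines of Python. This is a toy illustration, not the authors' implementation: `propose_equations` is a hypothetical stand-in for the LLM call (returning fixed candidate skeletons), and the crude grid search stands in for the paper's numerical optimizer.

```python
import math

# Hypothetical stand-in for the LLM: returns candidate equation
# skeletons as Python callables with free parameters (a, b).
def propose_equations(feedback):
    return [
        lambda x, a, b: a * x + b,            # linear
        lambda x, a, b: a * x * x + b,        # quadratic
        lambda x, a, b: a * math.sin(b * x),  # sinusoidal
    ]

def fit_params(eq, xs, ys, grid=(-3.0, 3.0), steps=25):
    # Crude grid search over (a, b); a real system would use a
    # proper numerical optimizer here.
    best = (float("inf"), None)
    lo, hi = grid
    for i in range(steps):
        a = lo + (hi - lo) * i / (steps - 1)
        for j in range(steps):
            b = lo + (hi - lo) * j / (steps - 1)
            loss = sum((eq(x, a, b) - y) ** 2 for x, y in zip(xs, ys))
            if loss < best[0]:
                best = (loss, (a, b))
    return best

# Toy data generated from y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]

feedback = []
for _ in range(3):  # iterative refinement rounds
    candidates = propose_equations(feedback)
    scored = [fit_params(eq, xs, ys) for eq in candidates]
    best_loss, best_params = min(scored, key=lambda s: s[0])
    # In the actual method, the best proposals (with their losses)
    # are prompted back to the LLM to guide the next round.
    feedback.append(best_loss)

print(best_loss, best_params)
```

Here the linear skeleton recovers the generating equation exactly; the key idea is that the LLM, unlike a blind combinatorial search, proposes skeletons informed by scientific context and by feedback from previous rounds.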
Read the full paper here

April 2025

March 2025

To specialize a general-purpose language model on a target domain, extending the model’s pre-training through adaptation on a data mixture containing domain-specific data can be a crucial step. At this stage, it is important to determine the optimal mixture ratio (the proportion of adaptation data in the training data) under a fixed compute budget. To avoid expensive searches for the practically optimal ratio, the authors devise a scaling law that predicts the resulting validation loss as a function of model size, training data size, and mixture ratio. They also extend the law to cross-domain adaptation, in which a model adapted on one domain is evaluated on a different domain, and they demonstrate the law’s effectiveness in predicting validation-loss trends for a model of a given size.
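To make the idea concrete, a scaling law of this kind extends the familiar power-law form of pre-training loss with a dependence on the mixture ratio. The formula below is an illustrative sketch only; the paper's exact parameterization differs.

```latex
% Illustrative form, not the paper's exact law:
% N = model size, D = training data size, r = mixture ratio of
% domain-specific data, E, A, B, alpha, beta = fitted constants.
L(N, D, r) \;\approx\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}} \;+\; f(r),
```

where \(f(r)\) captures how the proportion of domain data shifts the achievable loss. Fitting the constants on a handful of small-scale runs then lets one predict the loss of larger configurations without training them.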
Read the full paper here

February 2025

January 2025

December 2024

November 2024

October 2024

September 2024

August 2024

July 2024

June 2024

More from Imminent

Imminent Research Grants

$100,000 to fund language technology innovators

Imminent was founded to help innovators who share the goal of making it easier for everyone living in our multilingual world to understand and be understood by all others. Each year, Imminent allocates $100,000 to fund five original research projects exploring the most advanced frontiers of the world of language services. Topics: Language economics – Linguistic data – Machine learning algorithms for translation – Human-computer interaction – The neuroscience of language.

Apply now

AI in Context

Imminent Readings

Eager to know more? Here, you will find a selection of articles from top newspapers, research publications, and leading magazines from around the world, exploring AI’s impact on language, culture, geopolitics, and economies.

Dive deeper

Imminent Research Reports

A journey through localization, technology, language, and research.

An essential resource for leaders and a powerful tool for gaining deeper knowledge and understanding of the multicultural world we live in.

Get your copy now