Translated's Research Center

Large Language Thoughts — 2025

Leading experts—Andrew Ng, Dario Amodei, Fei-Fei Li, and Stuart Russell—offer their insights on the latest advancements shaping the future of AI.

Technology

The rise of large-scale artificial intelligence drives us to confront one of the defining questions of our time: how can AI elevate the human experience rather than undermine it? As AI increasingly reshapes how we communicate, learn, govern, and imagine our future, the stakes are immense. This year, we turn to cultural leaders whose insights illuminate AI’s deep entanglement with human society and guide us toward building technologies that merge advanced intelligence with the qualities that make us human. From foundational concerns to applied advances, a deeper conversation is emerging that goes beyond power and performance to examine the future we are building. Leading voices in AI are converging on a clearer understanding of what this technology is and how it will influence the world around us. 

Dario Amodei urges us to see AI not as an engineered machine but as a “grown” system to be cultivated with care, grounded in safety and long-term values. His perspective echoes Stuart Russell’s concern that, while scaling up models may improve performance, it does little to deepen our understanding of how these systems function, potentially leading to unpredictable outcomes and fragile, unsustainable trajectories. From an applied perspective, Andrew Ng highlights agentic systems that think, reflect, and collaborate, aiming to extend human capabilities rather than replace them. Bringing these insights into a broader frame, Fei-Fei Li reminds us that AI is not only a technical force but also a cultural one. As machines learn to see, speak, and respond in increasingly human ways, they challenge our ideas of truth, trust, and meaning.


Together, these voices remind us that AI’s future isn’t just a technical challenge — it’s a human one, shaped by those who design, deploy, and are ultimately affected by it. The message is not one of fear but of responsibility. If AI is a mirror, it reflects not only what we know but who we are and who we wish to become. Artificial intelligence is not just a tool or a challenge; it is an evolutionary threshold. How we meet it will define the next chapter of human innovation. 



Dario Amodei


CEO and cofounder of Anthropic

Dario Amodei is an American artificial intelligence researcher and entrepreneur. He is the co-founder and CEO of Anthropic, the company behind the large language model series Claude AI. He was previously the vice president of research at OpenAI.

Editor’s Note on the Interview with Amodei: “The Future of U.S. AI Leadership” by the Council on Foreign Relations

Dario Amodei, CEO and cofounder of Anthropic, previously worked at Google Brain and led research at OpenAI, where he helped develop GPT-2 and GPT-3. He left OpenAI in late 2020 to found Anthropic as a mission-driven, public-benefit corporation. He and his cofounders believed that “scaling laws” (the idea that simply increasing computation and data greatly improves AI capabilities) would lead to powerful and unpredictable AI systems, requiring strict safety measures.
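For readers who want the term made concrete, scaling laws are usually stated as empirical power laws linking a model’s test loss to its parameter count, dataset size, and training compute. The rendering below follows the general form reported by Kaplan et al. (2020); it is illustrative background, not material from the interview itself.

% Empirical scaling laws: test loss falls as a power law in parameters N,
% dataset size D, and training compute C, whenever the other two are not the bottleneck.
\[
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
  L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
\]
% The fitted exponents are small positive numbers well below 1, so halving the loss
% requires multiplying compute and data many times over, which is why capability
% gains have tracked enormous increases in scale.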
This conviction led to a strong focus on safety, including major investments in “mechanistic interpretability,” a field that explores a model’s internal workings to help developers understand why it behaves the way it does. The company’s founders also pioneered “constitutional AI,” a method of training models on explicit principles, and introduced a “responsible scaling policy” inspired by biosafety levels.
Their goal: ensure that as models grow more capable, Anthropic implements rigorous controls to prevent misuse, particularly in bioweapons development, cyberattacks, or other harmful applications. Amodei outlines AI’s extraordinary economic and societal potential. He expects progress in areas like biology and healthcare, with AI accelerating breakthroughs against complex diseases such as cancer and Alzheimer’s. However, he also fears significant job displacement as AI systems eventually learn to perform most cognitive tasks. Amodei suggests society will need new frameworks to preserve human meaning, noting that economic productivity alone should not define people’s worth.
National security is a key concern. Export controls on advanced chips, he argues, are essential to prevent adversaries from building near-equivalent AI. While smaller models can be trained cheaply, frontier models require massive computing power, making hardware access critical. Amodei supports U.S. government testing of AI systems for dangerous capabilities, improved industrial security to prevent espionage, and a stable energy supply so large data centers can be built domestically or in allied nations.


Society will need new frameworks to preserve human meaning; economic productivity alone should not define people’s worth.


He sees only partial scope for U.S.-China cooperation on AI due to intense economic and military competition. Still, areas like preventing autonomous AI weaponization might be worth a limited dialogue. Amodei also hints at the importance of investigating AI “experience” or “sentience,” suggesting we should be prepared to handle unexpected ethical dilemmas if future systems display human-like cognition. Asked what remains uniquely human in an AI-dominated world, Amodei highlights relationships, moral obligations to others, and the drive for achievement, none of which he believes are diminished simply because a machine can surpass human intellect.

In His Own Words:


Andrew Ng


Founder and executive chairman of Landing AI

Dr. Andrew Ng is a globally recognized leader in AI (Artificial Intelligence). He is Founder of DeepLearning.AI, Executive Chairman of LandingAI, General Partner at AI Fund, Chairman & Co-Founder of Coursera, and an Adjunct Professor at Stanford University’s Computer Science Department. As a pioneer in machine learning and online education, Dr. Ng has changed countless lives through his work in AI and has authored or co-authored over 200 research papers in machine learning, robotics, and related fields. In 2023, he was named to the Time100 AI list of the most influential people in AI.

Key Takeaways from the Microsoft Build 2024 Keynote: “The Rise of AI Agents and Agentic Reasoning”

Where are the biggest opportunities in AI right now? While much of the focus is on new models and infrastructure, the real value comes from building applications that use AI.

• AI is like electricity: a general-purpose technology with endless uses.
• Most attention is on models and infrastructure, but applications are where the most value will be created.

• In the past, building an AI system could take 6–12 months.
• Generative AI now allows teams to build and test prototypes in days.
• This leads to faster experimentation and quicker product development.

• Ng’s top trend to watch is agentic AI: AI systems that can plan, reason, revise, and act step-by-step.
• Instead of doing a task all at once, agentic AI works in loops: planning, testing, and improving.

• Reflection – AI reviews and improves its own work (a minimal sketch of this loop follows the list).
• Tool Use – AI calls on tools or APIs when needed.
• Planning – AI breaks down complex tasks into steps.
• Multi-agent Collaboration – AI takes on multiple roles to complete a task more effectively.
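To make the pattern concrete, here is a minimal Python sketch of the reflection loop described above. It is an illustration only: the llm() helper is a hypothetical stand-in for a real model call, not code shown in the keynote.

# Minimal sketch of the "reflection" agentic pattern: draft, self-critique, revise.
# llm() is a hypothetical placeholder; a real system would call a model API here.

def llm(prompt: str) -> str:
    """Stand-in for a language model call."""
    return f"[model output for: {prompt[:40]}...]"

def reflect_and_revise(task: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly critique and rewrite it."""
    draft = llm(f"Complete this task:\n{task}")
    for _ in range(rounds):
        critique = llm(f"Task:\n{task}\n\nDraft:\n{draft}\n\nList concrete problems with this draft.")
        draft = llm(f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\nRewrite the draft, fixing those problems.")
    return draft

print(reflect_and_revise("Count the players visible in each frame of a soccer clip."))

Tool use, planning, and multi-agent collaboration follow the same shape: instead of producing one answer in a single pass, the model’s output at each step determines what the program does next.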

•  Ng showcased demos where AI agents analyze images and videos.
•  Examples include counting players on a field, identifying goals in soccer videos, and describing clips.
•  These tools can generate code and metadata, helping companies make use of large collections of visual data.

Ng also pointed to several enabling trends:
• Faster text generation (thanks to hardware and software advances).
• AI models tuned for tool use, not just answering questions.
• Growing importance of handling unstructured data (like images and video).
• Visual AI is still early, but it’s starting to unlock a lot of value.

It’s a great time to build with AI. Generative and agentic AI are making development faster and easier, opening up new possibilities that didn’t exist even a year ago.

In His Own Words:


Stuart Russell


Professor at the University of California, Berkeley

Stuart Russell received his B.A. with first-class honours in physics from Oxford University and his Ph.D. in computer science from Stanford. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences. He is co-chair of the World Economic Forum Council on AI and the OECD Expert Group on AI Futures. His research covers a wide range of topics in AI including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations.

Editor’s Note on Russell’s Speech at the World Knowledge Forum: “The Ethics of AI”

The long-standing goal of AI, creating machines more intelligent than humans, raises a fundamental question: “What if we succeed?” Achieving true Artificial General Intelligence (AGI) would be a civilization-scale event, rivaling humanity’s greatest milestones. Yet the field, from its 1940s beginnings, seldom asked what that success would mean for humankind.
Russell argues that current AI systems, especially large neural networks (“giant black boxes”), don’t really match this AGI goal. Deep learning relies on trillions of adjustable parameters and enormous training sets, akin to “breeding ever-bigger birds” rather than building a plane from known aerodynamic principles. Despite striking advances (e.g., machine translation, AlphaFold for protein structures, massively faster simulations, generative design, and the famous AlphaGo victory), major gaps remain. For example, self-driving cars are still error-prone after decades of work, large language models fail at basic arithmetic, and the superhuman Go programs were recently shown to have exploitable blind spots.



Looking ahead, some researchers believe merely scaling these systems another hundredfold could lead to genuine AGI by 2027 or soon thereafter. Investments are colossal, dwarfing the budgets of the Manhattan Project or the Large Hadron Collider. But it is unclear whether bigger models alone will yield real understanding and mastery of intelligence, or whether the entire effort might collapse in a spectacular “bubble burst,” bigger than any previous “AI winter.”
If genuine AGI does emerge, the potential upside is massive. A superintelligent AI could, in principle, replicate and expand the best of human civilization on an unprecedented scale, possibly increasing global GDP tenfold or more. Yet there is also an extreme downside: human extinction, or a world where humans are “infantilized,” reduced to passive dependents while superintelligent machines run everything. Is that what we want? In Russell’s view, this is not a niche ethical question but common sense: we must ensure humanity’s survival and autonomy if we create entities more powerful than we are.
Russell proposes that any AI system’s sole objective should be to further human “preferences,” meaning all the things we care about now and in the future. Crucially, the AI must acknowledge it does not fully know our preferences in advance and must learn them in a provably safe manner.
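One simplified way to render this idea (a sketch of its general shape, not Russell’s exact formalism): the machine treats human preferences as an unknown quantity, keeps a probability distribution over what they might be, and chooses actions that do well in expectation under that uncertainty, updating its beliefs as it observes human behavior.

\[
  a^{*} = \arg\max_{a} \; \mathbb{E}_{\theta \sim P(\theta \mid \text{observed human behavior})}\big[\, U_{\theta}(a) \,\big]
\]

Here \theta parameterizes what people actually care about, U_\theta is the value those preferences assign to outcomes, and P is revised as evidence accumulates. Because the machine never assumes it knows \theta exactly, it retains a reason to ask, to defer, and to accept correction, which is the property the phrase “provably safe” is pointing at.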

This approach faces deep philosophical dilemmas: people’s preferences can be manipulated by oppressive social structures, and disagreements between billions of individuals are inevitable. Nevertheless, some framework for preference aggregation (e.g., a refined form of utilitarianism) is essential. Finally, coexistence with a superior intelligence may be intrinsically hard. Russell notes that repeated attempts to envision a mutually acceptable arrangement have failed. Perhaps the best outcome is that ultra-intelligent machines decide they cannot stay here without us losing our autonomy and therefore depart, leaving humans intact and calling on us only when absolutely needed. Such a scenario, if it occurs, might be the ultimate sign of AI done right.

In His Own Words:


Fei-Fei Li


Computer Scientist and Founder of ImageNet

Fei-Fei Li is known for leading the development of ImageNet, which helped catalyze machine learning approaches to vision recognition, and for being an essential voice shaping the science behind artificial intelligence (AI) today. She offers an extended, thoughtful, and heartfelt memoir in this book. Beautifully written and grounded in many rich, thought-provoking observations, the book describes her journey from being a child immigrant from China to her present position as a Stanford professor.

From the Book “The World I See: Curiosity, Exploration, and Discovery at the Dawn of AI”