Technology
The rise of large-scale artificial intelligence drives us to confront one of the defining questions of our time: how can AI elevate the human experience rather than undermine it? As AI increasingly reshapes how we communicate, learn, govern, and imagine our future, the stakes are immense. This year, we turn to cultural leaders whose insights illuminate AI’s deep entanglement with human society and guide us toward building technologies that merge advanced intelligence with the qualities that make us human. From foundational concerns to applied advances, a deeper conversation is emerging that goes beyond power and performance to examine the future we are building. Leading voices in AI are converging on a clearer understanding of what this technology is and how it will influence the world around us.
Dario Amodei urges us to see AI not as an engineered machine but as a “grown” system to be cultivated with care, grounded in safety and long-term values. His perspective echoes Stuart Russell’s concern that, while scaling up models may improve performance, it does little to deepen our understanding of how these systems function, potentially leading to unpredictable outcomes and fragile, unsustainable trajectories. From an applied perspective, Andrew Ng highlights agentic systems that think, reflect, and collaborate, aiming to extend human capabilities rather than replace them. Bringing these insights into a broader frame, Fei-Fei Li reminds us that AI is not only a technical force but also a cultural one. As machines learn to see, speak, and respond in increasingly human ways, they challenge our ideas of truth, trust, and meaning.
Together, these voices remind us that AI’s future isn’t just a technical challenge — it’s a human one, shaped by those who design, deploy, and are ultimately affected by it. The message is not one of fear but of responsibility. If AI is a mirror, it reflects not only what we know but who we are and who we wish to become. Artificial intelligence is not just a tool or a challenge; it is an evolutionary threshold. How we meet it will define the next chapter of human innovation.

Dario Amodei
CEO and cofounder of Anthropic
Dario Amodei is an American artificial intelligence researcher and entrepreneur. He is the cofounder and CEO of Anthropic, the company behind the Claude series of large language models. He was previously the vice president of research at OpenAI.
Editor’s Note on the Interview with Amodei: “The Future of U.S. AI Leadership” by the Council on Foreign Relations
Dario Amodei, CEO and cofounder of Anthropic, previously led research at OpenAI and Google Brain, helping develop GPT-2 and GPT-3. He left OpenAI in late 2020 to found Anthropic as a mission-driven, public-benefit corporation.
Anthropic was founded on the belief that “scaling laws,” the idea that simply increasing computation and data dramatically improves AI capabilities, would eventually lead to powerful and unpredictable systems requiring strict safety measures. This conviction led to a strong focus on safety, including major investments in “mechanistic interpretability”: a field that explores a model’s internal workings to help developers understand why it behaves the way it does. The company’s founders also pioneered “constitutional AI,” a method of training models based on explicit principles, and introduced a “responsible scaling policy” inspired by biosafety levels.
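For readers unfamiliar with the term, “scaling laws” are usually stated as empirical power laws. A commonly cited form (an illustration, not something from this interview) says that a model’s prediction error falls off smoothly as parameters N, data D, and compute C grow, roughly as

$$L(N) \approx \left(\tfrac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\tfrac{D_c}{D}\right)^{\alpha_D}, \qquad L(C) \approx \left(\tfrac{C_c}{C}\right)^{\alpha_C}$$

where L is the training loss and N_c, D_c, C_c and the exponents α are constants fitted to experiments. The implication Anthropic’s founders acted on is that capability gains arrive predictably from scale alone, without any new understanding of how the models work inside.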
Their goal: ensure that as models grow more capable, Anthropic implements rigorous controls to prevent misuse, particularly in bioweapons, cyberattacks, or other harmful applications. Amodei outlines AI’s extraordinary economic and societal potential. He expects progress in areas like biology and healthcare, with AI accelerating breakthroughs against complex diseases such as cancer and Alzheimer’s. However, he also fears significant job displacement as AI systems eventually learn to perform most cognitive tasks. Amodei suggests society will need new frameworks to preserve human meaning, noting that economic productivity alone should not define people’s worth.
National security is a key concern. Export controls on advanced chips, he argues, are essential to prevent adversaries from building near-equivalent AI. While smaller models can be trained cheaply, frontier models require massive computing power, making hardware access critical. Amodei supports U.S. government testing of AI systems for dangerous capabilities, improved industrial security to prevent espionage, and a stable energy supply so large data centers can be built domestically or in allied nations.
Society will need new frameworks to preserve human meaning; economic productivity alone should not define people’s worth.
He sees only partial scope for U.S.-China cooperation on AI due to intense economic and military competition. Still, areas like preventing autonomous AI weaponization might be worth a limited dialogue. Amodei also hints at the importance of investigating AI “experience” or “sentience,” suggesting we should be prepared to handle unexpected ethical dilemmas if future systems display human-like cognition. Asked what remains uniquely human in an AI-dominated world, Amodei highlights relationships, moral obligations to others, and the drive for achievement, none of which he believes are diminished simply because a machine can surpass human intellect.
In His Own Words:

AI models are very unpredictable. They’re inherently statistical systems. One thing I often say is we grow them more than we build them. They’re like a child’s brain developing. So controlling them, making them reliable, is very difficult. The process of training them is not straightforward. So just from a systems safety perspective, making these things predictable and safe is very important. And then, of course, there’s the use of them, the use of them by people, the use of them by nation-states, the effect that they have when companies deploy them. And so we really felt like we needed to build this technology in absolutely the right way. So I’ll give a few examples of how we’ve really displayed a commitment to these ideas.
One is we invested very early in the science of what is called mechanistic interpretability, which is looking inside the AI models and trying to understand exactly why they do what they do. One of our seven cofounders, Chris Olah, is the founder of the field of mechanistic interpretability. This had no commercial value, or at least no commercial value for the first four years that we worked on it. But nevertheless, we had a team working on this the whole time in the presence of fierce commercial competition, because we believe that understanding what is going on inside these models is a public good that benefits everyone. And we published all of our work on it so others could benefit from it as well. Another example is we came up with this idea of constitutional AI, which is training AI systems to follow a set of principles instead of training them purely from mass data or human feedback. This allows you to get up in front of Congress and say: “These are the principles according to which we trained our model.”
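To make the idea concrete, here is a minimal sketch, in Python, of the critique-and-revise loop at the heart of constitutional AI. It is an illustration only, not Anthropic’s actual training pipeline; the llm() helper and the example principles are hypothetical placeholders.

# Minimal sketch of a constitutional-AI-style critique-and-revise loop.
# llm() is a hypothetical placeholder for any text-generation model call.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid content that could facilitate violence or other serious harm.",
]

def llm(prompt: str) -> str:
    """Placeholder: swap in a real model call here."""
    raise NotImplementedError

def constitutional_revision(user_request: str) -> str:
    draft = llm(f"Respond to the user request:\n{user_request}")
    for principle in CONSTITUTION:
        critique = llm(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response falls short of the principle."
        )
        draft = llm(
            f"Response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so that it satisfies the principle."
        )
    # In training, revised answers like this become the data the model learns from.
    return draft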







Andrew Ng
Founder and executive chairman of Landing AI
Dr. Andrew Ng is a globally recognized leader in AI (artificial intelligence). He is Founder of DeepLearning.AI, Executive Chairman of LandingAI, General Partner at AI Fund, Chairman & Co-Founder of Coursera, and an Adjunct Professor at Stanford University’s Computer Science Department. As a pioneer in machine learning and online education, Dr. Ng has changed countless lives through his work in AI and has authored or co-authored over 200 research papers in machine learning, robotics, and related fields. In 2023, he was named to the Time100 AI list of the most influential people in AI.
Key Takeaways from the Microsoft Build 2024 Keynote: “The Rise of AI Agents and Agentic Reasoning”
Where are the biggest opportunities in AI right now? While much of the focus is on new models and infrastructure, the real value comes from building applications that use AI.
Key Ideas
• AI is like electricity: a general-purpose technology with endless uses.
• Most attention is on models and infrastructure, but applications are where the most value will be created.
Faster AI Development
• In the past, building an AI system could take 6–12 months.
• Generative AI now allows teams to build and test prototypes in days.
• This leads to faster experimentation and quicker product development.
Agentic AI
• Ng’s top trend to watch is agentic AI: AI systems that can plan, reason, revise, and act step by step.
• Instead of doing a task all at once, agentic AI works in loops: planning, testing, and improving.
Four Common Patterns in Agentic AI
• Reflection – AI reviews and improves its own work.
• Tool Use – AI calls on tools or APIs when needed.
• Planning – AI breaks down complex tasks into steps.
• Multi-agent Collaboration – AI takes on multiple roles to complete a task more effectively (a minimal sketch of these patterns appears below).
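The sketch below shows, in Python, how these patterns might fit together in a single loop: the agent plans, calls a tool, drafts an answer, then reflects and revises. It is an illustration of the general idea, not code from the keynote; the llm() and search_web() helpers are hypothetical placeholders.

# Minimal sketch of an agentic loop combining planning, tool use, and reflection.
# llm() and search_web() are hypothetical placeholders, not a specific vendor API.

def llm(prompt: str) -> str:
    """Placeholder: swap in a real language-model call here."""
    raise NotImplementedError

def search_web(query: str) -> str:
    """Placeholder for a tool the agent is allowed to call."""
    raise NotImplementedError

def run_agent(task: str, max_rounds: int = 3) -> str:
    # Planning: break the task into steps.
    plan = llm(f"List the steps needed to complete this task:\n{task}")
    # Tool use: gather material the model does not already have.
    notes = search_web(task)
    # First draft.
    draft = llm(f"Task: {task}\nPlan: {plan}\nNotes: {notes}\nWrite a first draft.")
    # Reflection: critique and revise in a loop.
    for _ in range(max_rounds):
        critique = llm(f"Critique this draft of the task '{task}':\n{draft}")
        draft = llm(f"Revise the draft using this critique.\nCritique: {critique}\nDraft: {draft}")
    return draft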
Visual AI
• Ng showcased demos where AI agents analyze images and videos.
• Examples include counting players on a field, identifying goals in soccer videos, and describing clips.
• These tools can generate code and metadata, helping companies make use of large collections of visual data.
Emerging Trends to Watch
• Faster text generation (thanks to hardware/software advances).
• AI models tuned for tool use, not just answering questions.
• Growing importance of handling unstructured data (like images and video).
• Visual AI is still early, but it’s starting to unlock a lot of value.
It’s a great time to build with AI. Generative and agentic AI are making development faster and easier, opening up new possibilities that didn’t exist even a year ago.
In His Own Words:





The one trend I’m most excited about is agentic AI workflows. When I started saying this, it was a bit of a controversial statement, but now the term “AI agents” has become so widely used by technical and non-technical people that it is becoming a hype term. So let me just share with you how I view AI agents and why I think they’re important.
The way that most of us use large language models today is with what is called “zero-shot prompting,” and that roughly means we ask it to write an essay or some other output for us by going from the first word to the last word all in one go, without ever using backspace, just writing from start to finish. And it turns out people don’t do their best writing this way.
Well, here’s what an agentic workflow is like. To generate an essay, we ask an AI to first write an essay outline. We ask it to do some web research, download some web pages, and put the results into the context. Then we ask it to write the first draft, and then to read the first draft, critique it, revise the draft, and so on. This workflow looks more like doing some thinking or some research, then some revision, then going back to do more thinking and more research. Going around this loop over and over takes longer, but it results in a much better work output. In some teams I work with, we apply this agentic workflow to process complex, tricky legal documents, to assist with healthcare diagnosis, or to handle very complex compliance with government paperwork. Many times I see this workflow drive much better results. And indeed, it turns out that there are benchmarks that seem to show agentic workflows deliver much better results.







Stuart Russell
Professor at University of California, Berkeley
Stuart Russell received his B.A. with first-class honours in physics from Oxford University and his Ph.D. in computer science from Stanford. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences. He is co-chair of the World Economic Forum Council on AI and the OECD Expert Group on AI Futures. His research covers a wide range of topics in AI including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations.
Editor’s Note on Russell’s Speech at World Knowledge Forum: “The Ethics of AI”
The long-standing goal of AI, creating machines more intelligent than humans, raises a fundamental question: “What if we succeed?” Achieving true Artificial General Intelligence (AGI) would be a civilization-scale event, rivaling humanity’s greatest milestones. Yet the field, from its 1940s beginnings, seldom asked what that success would mean for humankind.
Russell argues that current AI systems, especially large neural networks (“giant black boxes”), don’t really match this AGI goal. Deep learning relies on trillions of adjustable parameters and enormous training sets, akin to “breeding ever-bigger birds” rather than building a plane from known aerodynamic principles. Despite striking advances (e.g., machine translation, AlphaFold for protein structures, massively faster simulations, generative design, and the famous AlphaGo victory), major gaps remain. For example, self-driving cars are still error-prone after decades of work, large language models fail at basic arithmetic, and superhuman Go programs were recently shown to have exploitable blind spots.
The long-standing goal of AI, creating machines more intelligent than humans, raises a fundamental question: “What if we succeed?”
Looking ahead, some researchers believe merely scaling these systems another hundredfold could lead to genuine AGI by 2027 or soon thereafter. Investments are colossal, dwarfing the budgets of the Manhattan Project or the Large Hadron Collider. But it is unclear whether bigger models alone will yield real understanding and mastery of intelligence, or whether the entire effort might collapse in a spectacular “bubble burst” bigger than any previous “AI winter.”
If genuine AGI does emerge, the potential upside is massive. A superintelligent AI could, in principle, replicate and expand the best of human civilization on an unprecedented scale, possibly increasing global GDP tenfold or more. Yet there is also an extreme downside: human extinction, or a world where humans are “infantilized,” reduced to passive dependents while superintelligent machines run everything. Is that what we want? In Russell’s view, this is not a niche ethical question but common sense: we must ensure humanity’s survival and autonomy if we create entities more powerful than we are.
Russell proposes that any AI system’s sole objective should be to further human “preferences,” meaning all the things we care about now and in the future. Crucially, the AI must acknowledge it does not fully know our preferences in advance and must learn them in a provably safe manner.
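One way to make this precise, in the spirit of the assistance-game formulation Russell has developed elsewhere (the equation below is a sketch, not something stated in the speech), is to have the machine choose a policy that maximizes expected human reward while remaining uncertain about the human preference parameters θ:

$$\pi^{*} = \arg\max_{\pi}\ \mathbb{E}_{\theta \sim P(\theta)}\left[\sum_{t} \gamma^{t}\, R(s_{t}, a_{t}; \theta)\right]$$

Here R encodes what humans actually want, P(θ) is the machine’s belief about those preferences, and observing human behavior updates that belief. Because the machine is never certain of θ, it has a built-in incentive to ask, to defer, and to allow itself to be corrected or switched off.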


This approach faces deep philosophical dilemmas: people’s preferences can be manipulated by oppressive social structures, and disagreements among billions of individuals are inevitable. Nevertheless, some framework for preference aggregation (e.g., a refined form of utilitarianism) is essential. Finally, coexistence with a superior intelligence may be intrinsically hard. Russell notes that repeated attempts to envision a mutually acceptable arrangement have failed. Perhaps the best outcome is that ultra-intelligent machines decide they cannot stay here without us losing our autonomy and therefore depart, leaving humans intact and making themselves available only when absolutely needed. Such a scenario, if it occurs, might be the ultimate sign of AI done right.
In His Own Words:





The goal of AI has always been to create machines that exceed human intelligence along every relevant dimension. Nowadays we call that AGI or artificial general intelligence. What we failed to do for most of the history of the field is to ask a very important question: what if we succeed in that goal? What could possibly go wrong if we introduce a new class of entities — a new species, if you like — that is more intelligent than us?
Let’s compare our present situation with the early days of airplane technology: do we have the Wright brothers’ version of AGI? I am pretty convinced that the answer is “no,” because we haven’t the faintest idea how our version of AI works. The Wright brothers did have a pretty good idea of how their airplane worked, because they put it together themselves. They figured out how big the engine needed to be for the machine to go fast enough to lift off and stay off the ground. They did all the basic calculations, and so they had a pretty good idea before they even flew it that it was going to fly.
But the AI systems we have now are giant black boxes: we make about a trillion, trillion, trillion small random mutations to those elements until the thing behaves approximately intelligently. It is as if the Wright brothers, instead of designing and building an airplane, had actually decided to go into the bird-breeding business and breed larger and larger and larger birds until they bred a bird that was big enough to carry passengers. The FAA would never certify the giant bird. They would say: “Your bird is still eating people and is still dropping them in the ocean, and we don’t know how it works and we don’t know what it’s going to do, so we’re not going to certify it.”
That’s sort of where we are right now with AI. It is not like an airplane, but like a giant bird. And in my view the giant birds will probably never get big enough to carry hundreds or thousands of passengers, and they will probably never go faster than the speed of sound.
We need further breakthroughs, both in terms of capabilities and in terms of understanding: because capabilities without understanding are really of no use to us.







Fei-Fei Li
Computer Scientist and Founder of ImageNet
Fei-Fei Li is known for leading the development of ImageNet, which helped catalyze machine learning approaches to vision recognition, and for being an essential voice shaping the science behind artificial intelligence (AI) today. She offers an extended, thoughtful, and heartfelt memoir in this book. Beautifully written and grounded in many rich, thought-provoking observations, the book describes her journey from being a child immigrant from China to her present position as a Stanford professor.
From the Book “The World I See: Curiosity, Exploration, and Discovery at the Dawn of AI”
But this is what science has always been. A journey that only grows longer and more complex as it unfolds. Endlessly branching paths. An ever-expanding horizon.





At the heart of this technology–one that routinely seems like absolute magic, even to me–is yet another lesson in the power of data at large scales. And to be sure, “scale” is the operative word. For comparison, AlexNet debuted with a network of sixty million parameters (just enough to make reasonable sense of the ImageNet data set, at least in part), while transformers big enough to be trained on a world of text, photos, video, and more are growing well into hundreds of billions of parameters. It makes for endless engineering challenges, admittedly, but surprisingly elegant science. It’s as if these possibilities were waiting for us all along, since the days of LeCun’s ZIP code reader, or Fukushima’s neocognitron, or even Rosenblatt’s perceptron. Since the days of ImageNet, all of this was in there, somewhere.
Large language models, even the multimodal ones, may not be “thinking” in the truest, grandest sense of the term, and, lest we get too carried away, their propensity for absurd conceptual blunders and willingness to confabulate plausible-sounding nonsense makes this fact easy to remember.
Still, as they generate ever more sophisticated text, images, voice, and video, to the point that a growing chorus of commentators are sounding the alarm about our ability to separate truth from fantasy (as individuals, as institutions, and even as societies), it isn’t always clear how much the difference matters. It’s an especially sobering thought when one realizes that this–all of this–is barely version 1.0. On and on it goes. Algorithms expressing themselves at an effectively human level of sophistication. Robots gradually learning to navigate real environments. Vision models being trained not merely on photographs, but through real-time immersion in fully 3D worlds. AI that generates as fluently as it recognizes. And, rising up all around us, ethical implications that seem to reach deeper into human affairs with every passing moment.
But this is what science has always been. A journey that only grows longer and more complex as it unfolds. Endlessly branching paths. An ever-expanding horizon. New discoveries, new crises, new debates. A story forever in its first act. The future of AI remains deeply uncertain, and we have as many reasons for optimism as we do for concern. But it’s all a product of something deeper and far more consequential than mere technology: the question of what motivates us, in our hearts and our minds, as we create. I believe the answer to that question, more, perhaps, than any other, will shape our future. So much depends on who answers it.
As this field slowly grows more diverse, more inclusive, and more open to expertise from other disciplines, I grow more confident in our chances of answering it right. In the real world, there’s one North Star: Polaris, the brightest star in the Ursa Minor constellation. But in the mind, such navigational guides are limitless. Each new pursuit, each new obsession, hangs in the dark over its horizon, another gleaming trace of iridescence, beckoning. That’s why my greatest joy comes from knowing that this journey will never be complete. Neither will I. There will always be something new to chase. To a scientist, the imagination is a sky full of North Stars.




