Futures in Context
Michael Santoro
Professor of Management & Entrepreneurship at Santa Clara University
Michael A. Santoro is Professor of Management and Entrepreneurship at Santa Clara University and Director of the Business & Human Rights Lab. His recent work focuses on AI governance, accountability, and human rights, including widely discussed writing on the “oversight fallacy” in public-sector AI systems. He is co-founder and former co-editor-in-chief of the Business and Human Rights Journal (Cambridge University Press) and writes frequently on business ethics, corporate governance, healthcare, and the societal implications of artificial intelligence.
You’ve identified what you call the “denominator problem”—a fundamental flaw in how we measure AI deployment in societies. What is it, and why does it matter so much?
The concept is elementary—which is part of why I’m still surprised I’m the one saying it. A numerator is the count of observed harms: incidents, failures, adverse events. A denominator is the total number of opportunities for those harms to occur. A rate is one divided by the other. Without a denominator, a numerator tells you almost nothing.
If reported AI harms double in a year, that could mean systems are failing more often—or that reporting has improved, that detection is better, or simply that deployment has doubled. Each scenario has radically different policy implications. Without a denominator, they are indistinguishable in the data.
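To make that arithmetic concrete, here is a minimal sketch using entirely hypothetical numbers. It shows how the same doubling of reported incidents can correspond to very different harm rates depending on the unobserved denominator.

```python
# Hypothetical illustration of the "denominator problem":
# the same numerator (reported incidents) combined with different
# denominators (opportunities for harm) yields very different rates.

def harm_rate(incidents: int, opportunities: int) -> float:
    """Rate of observed harms per opportunity for harm to occur."""
    return incidents / opportunities

# Year 1: 100 reported incidents across 1 million AI-mediated decisions.
year1 = harm_rate(incidents=100, opportunities=1_000_000)

# Year 2, scenario A: incidents double, deployment is flat -> systems fail more often.
year2_a = harm_rate(incidents=200, opportunities=1_000_000)

# Year 2, scenario B: incidents double, deployment also doubles -> rate is unchanged.
year2_b = harm_rate(incidents=200, opportunities=2_000_000)

print(f"Year 1 rate:             {year1:.6f}")    # 0.000100
print(f"Year 2, scenario A rate: {year2_a:.6f}")  # 0.000200
print(f"Year 2, scenario B rate: {year2_b:.6f}")  # 0.000100
```

Looking only at the numerator, scenarios A and B are identical; only the rate distinguishes a system that is genuinely failing more often from one that is simply deployed twice as widely.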
The one domain where this largely works is autonomous vehicles. The numerator is crashes and injuries; the denominator is miles driven, vehicles in operation, and hours of autonomous engagement. Mandatory reporting makes both sides of the equation visible, and you can calculate meaningful rates and compare them across companies and time. That’s what functional safety measurement looks like.
In almost every other domain—from deepfakes to AI-assisted hiring to healthcare—the denominator is elusive or entirely absent. In healthcare, AI is already influencing diagnosis, triage, and treatment across health systems, but no major regulatory body has established a methodology for converting harm counts into rates. We’re counting incidents without measuring the opportunities for harm, and trying to draw conclusions from data that can’t support them.
The denominator problem isn’t a technical detail. It’s the foundation on which every subsequent question about AI safety, accountability, and equity depends. And the fact that we’re only beginning to name it tells you how preliminary this conversation still is. We’re just starting to ask the right questions. And until we solve the denominator problem, any framework for democratic oversight of AI will be built on sand.
How can we ensure that the global deployment of AI is grounded in democratic principles and human rights, and who needs to be part of that conversation?
I think the starting point is to recognize that no single discipline can adequately understand or govern AI. What we’re dealing with is a system that operates simultaneously at technical, institutional, and social levels. That means the conversation has to be genuinely interdisciplinary.
One way I think about it is in terms of different “buckets” of knowledge. At one end, you have people working on law, policy, ethics, and journalism—those focused on institutions, rights, and social impact. At the other end, there is a relatively small group of highly specialized technical experts working at the frontier of AI systems. Between these extremes is a much larger space of interaction, and that’s where most of the meaningful work needs to happen.
The problem right now is not that technical expertise dominates—it’s that the conversation is too often siloed. Technical communities are not always equipped to assess societal impact, and policy or legal communities often lack a working understanding of how these systems actually function. As a result, accountability is weakened on both sides.
The work that first brought me into this field was on algorithmic bias in medical appointment scheduling. The systems were designed to predict no-show risk and overbook accordingly, but in practice, they were sending Black patients to the least desirable slots—a technically accurate prediction producing a deeply inequitable outcome. The technical fix was to decouple the prediction from scheduling decisions and make the algorithm race-aware. But what actually made the work possible was the composition of the team—ethicists, operations researchers, computer scientists, and leaders from organizations like the Black Women’s Health Imperative, whose knowledge of the affected community was indispensable. Without that, we would not have been able to see the problem clearly, let alone solve it. The paper went on to win the Best Paper Award in a “Financial Times 50” journal, but I’d argue the methodological lesson is more important than the recognition: Technical fluency alone could not have caught this, and neither could ethical insight alone.
So when we talk about democratic governance of AI, it isn’t just a question of regulation. It’s a question of who is able to participate meaningfully in understanding and shaping these systems. And that, in turn, requires a broader distribution of fluency—not expertise in a narrow sense, but enough understanding across disciplines to engage with the technology and hold it accountable.
In the end, this is about democratic oversight. And democratic oversight only works if people understand what’s at stake, what the relevant levers are, and how their values can be translated into systems that are actually governable.
But that conversation is increasingly shaped by geopolitical competition. What is really at stake in the AI race, and what does it reveal about the kind of society we're trying to build?
There’s a competition underway between the United States and China to dominate AI, and both countries want to win. But before we talk about who’s ahead, I think the more important question is what the race is actually about—because I don’t think most people understand it correctly.
The instinct is to think about AI in terms of isolated applications: automating a task here, improving a process there. But the place where the gains with AI become truly transformative is not in isolated applications—it’s in what’s called enterprise AI. The term comes from enterprise software, and the intuition is similar: You’re not just using AI in one function, you’re integrating it across an entire organization—the janitor, the billing department, the doctor making a clinical recommendation—and then across interconnected organizations. The efficiency gains from automating individual jobs are modest at best. The gains become exponential when AI is woven through entire systems simultaneously. That’s the real game, and that’s what the competition is actually about.
Now, the surface reading is that China has the advantage here. A centrally planned economy can mandate cooperation between the military, state enterprises, and private companies without friction. The United States, by contrast, is so concerned about data security that different agencies sometimes can’t share information with each other. But I take a longer view. Historically, free markets have outperformed centrally planned economies. The competitiveness and decentralization of the U.S. system may ultimately produce better outcomes in implementing enterprise AI than top-down coordination can.
But here’s what I think we can’t afford to lose sight of in this race: Why are we doing this? If what I’m describing has validity—if AI does deliver on even a fraction of its promise—we will be a society richer than anything we’re currently imagining. Rich enough to provide care for people who are now isolated. Rich enough that someone will always be able to hold the hand of someone who doesn’t have access to medicine. Rich enough to build a more just world. That’s the utopian possibility we have to keep alive, even as we take the risks seriously. Otherwise, why are we developing this technology at all?
At the center of that race is a shift that changes everything: AI is no longer just a tool that supports decisions, it’s becoming a system that makes them. How does agentic AI actually work, and what in our current idea of accountability feels misjudged or simply outdated for these technologies?
This is actually the single most important distinction for understanding why AI is different from every technology that came before it. Before, we had tools. A tool does what you tell it to do. What’s different about AI—and what becomes fully realized in agentic AI—is that it watches how you make decisions, and then it starts making those decisions for you, optimizing them over time. And then, in what you might call the inference phase, it learns from how it’s being applied in the real world, and that trains the model further. What you have is no longer a tool. It’s a system: one that’s continuously evolving based on its own experience.
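As a rough illustration of that tool-versus-system distinction, here is a schematic sketch; the class and method names are hypothetical and do not reflect any real product's API. The point is the feedback loop: decisions are made, outcomes from deployment are recorded, and those outcomes then reshape how future decisions are made.

```python
# Schematic sketch of the tool-vs-system distinction. A tool would stop at
# decide(); a system also observes how its decisions play out in deployment
# and folds those outcomes back into its future behavior.

from dataclasses import dataclass, field

@dataclass
class AgenticSystem:
    model: dict = field(default_factory=dict)       # stands in for a learned policy
    experience: list = field(default_factory=list)  # outcomes observed in deployment

    def decide(self, situation: dict) -> str:
        """A tool would stop here: map an input to an output."""
        return self.model.get(situation.get("kind"), "defer_to_human")

    def observe(self, situation: dict, decision: str, outcome: float) -> None:
        """Deployment feedback: record how a decision actually fared."""
        self.experience.append((situation, decision, outcome))

    def retrain(self) -> None:
        """The loop that makes this a system rather than a tool:
        past deployment shapes future decisions."""
        for situation, decision, outcome in self.experience:
            if outcome > 0:  # keep decisions that worked in practice
                self.model[situation.get("kind")] = decision
        self.experience.clear()

# Minimal usage: the system's behavior changes because of what it observed.
agent = AgenticSystem()
agent.observe({"kind": "scheduling"}, "offer_morning_slot", outcome=1.0)
agent.retrain()
print(agent.decide({"kind": "scheduling"}))  # -> "offer_morning_slot"
```

The interesting accountability questions attach to the retrain step, the feedback loop, rather than to any single call to decide.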
Once you understand that, the question of accountability shifts entirely. It’s no longer simply whether humans are intervening and controlling the process. The real question is: Where in the system is the most effective place to intervene?
And this is where I think our most common instinct—the “human in the loop”—leads us badly astray. The phrase is usually meant to imply that there’s a human override at the end of the process. But that fundamentally undoes the whole purpose of having AI in the first place. Take driverless cars. We have them because people are generally bad drivers. When the system is working correctly and has the right data inputs, the last thing you want is a driver saying, “I know you’re telling me to swerve left, but I’m going to keep going straight.” That override defeats everything driverless cars are trying to accomplish.
The same principle applies in far more serious contexts. In the use of lethal weaponry, the question of who or what is making a targeting decision in an uncertain situation—say a white van that could be carrying civilians or combatants—is one of the hardest problems in ethics and governance. And our instinct is to say: The higher the stakes, the more we need a person to intervene at the end. But it’s precisely the opposite. The higher the stakes, the more important it is to have built the right values into the system from the beginning—and to identify the right points of human intervention within the process, not just at the end of it. An individual soldier under stress is actually less capable of incorporating the ethical values that determine what risks to a civilian population are tolerable.
What we do need at the end is oversight: humans using technology to monitor whether the model is working correctly, whether the data inputs are right. Because those are two very different problems: Is the system failing, or is the data wrong? Each requires a different response. But a human veto at the final moment is not accountability. It’s a false comfort that gives us the feeling of control without the substance of it.
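A minimal sketch of what that kind of oversight instrumentation might look like, assuming hypothetical checks, feature names, and thresholds: input-side problems and model-side problems are detected separately, because each calls for a different response.

```python
# Illustrative sketch (not a real monitoring product) of why "is the model
# failing?" and "is the data wrong?" are separate checks with separate fixes.

from statistics import mean

def data_looks_wrong(batch: list, expected_keys: set) -> bool:
    """Input-side check: missing fields or out-of-range values point to a
    data-pipeline problem, not a model problem."""
    for record in batch:
        if not expected_keys.issubset(record):
            return True
        if not (0 <= record.get("age", 0) <= 120):  # hypothetical sanity bound
            return True
    return False

def model_looks_wrong(predictions: list, outcomes: list, max_error: float = 0.2) -> bool:
    """Output-side check: compare predictions against observed outcomes."""
    errors = [abs(p - o) for p, o in zip(predictions, outcomes)]
    return mean(errors) > max_error

def triage(batch, expected_keys, predictions, outcomes) -> str:
    if data_looks_wrong(batch, expected_keys):
        return "fix the data pipeline"            # retraining would be the wrong response
    if model_looks_wrong(predictions, outcomes):
        return "retrain or roll back the model"
    return "system operating within expectations"
```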
Underneath all of this, there seems to be a deeper anxiety—one that goes beyond jobs or efficiency. What is this anxiety about?
I think the anxiety surrounding AI is real, and I don’t think it’s exclusively about jobs. It goes much deeper than that. It’s fundamentally about identity. So many of the functions we associate with being a person—deliberation, memory, judgment—are now being mediated, if not replaced, by AI. And that raises a question that is genuinely philosophical: What will it mean to be human?
I’ve been writing about human rights in China for decades, and one of the things that work has taught me is that the concept of personhood is not universal. In the West, we understand ourselves as autonomous beings—defined by our own reasoning, choices, and individual will, standing apart from the relationships around us. In a Confucian culture, as in China to this day, identity is always relational. You’re the son of someone, the worker of somewhere. You’re defined primarily by your place within a web of relationships, not by any individual separateness.
What I argue is that if identity itself can vary across cultures, if different societies can understand personhood in fundamentally different ways, then it is at least plausible that AI will produce an identity shift in all of us: that it will change what it means to be a person, in ways whose implications we can't yet fully see.
And I think that’s what underlies a great deal of the anxiety we feel. It’s not just “the robot is taking my job.” It’s something closer to “this is who I am.” I’m the person who learned how to do this, who can do it well, who is needed because of it. That sense of being needed—of being irreplaceable—is part of how we understand ourselves. We spend a lifetime becoming someone—acquiring the skills, the judgment, the standing that make us recognizable to others and to ourselves. When a machine can do that work, what’s threatened isn’t only a paycheck. It’s the quiet conviction that our particular existence matters, that we are necessary in some specific way that no one else could fill. AI is putting pressure on that conviction in ways that are profound and unsettling.
A related anxiety, running alongside the identity question, is that we cannot control these machines: they're too fast, too opaque, too capable of acting beyond our line of sight, and the instinct is to pull the plug. But that instinct misreads the situation. The architecture of control, as I said earlier, is largely tractable. What's underneath that urge is something more existential: the worry that we have set something in motion and are now watching it from the outside. The cure for that anxiety isn't rejecting AI technology altogether; it's remembering that we still set the objectives, define the constraints, and bear responsibility for what these systems do.
In the end, what AI is forcing on us is not a technical question but a human one. It’s asking us to say, more clearly than we’ve ever had to, what we want to remain ours—what we want to keep doing ourselves, what we want our lives to mean when much of what once required us no longer does. We don’t yet have an answer. But that, I think, is the question we should be grappling with.