Translated's Research Center

AI as Civic Infrastructure

An interview with Vilas Dhar on what's truly at stake in deploying AI in the governance of our societies.

Futures in Context

Vilas Dhar

President of the Patrick J. McGovern Foundation

Vilas Dhar is a globally recognized authority on artificial intelligence and society, serving as President of the Patrick J. McGovern Foundation, a $1.5 billion philanthropy advancing AI for public purpose. He has served on the United Nations Secretary-General’s High-Level Advisory Body on Artificial Intelligence, is the U.S. Government’s nominated expert to the Global Partnership on AI, and advises Stanford’s Institute for Human-Centered AI, OECD.AI, and MIT Solve. Named a World Economic Forum Young Global Leader in 2022, he is also a public intellectual whose writing and teaching have reached audiences worldwide, including 750,000 learners of his LinkedIn Learning course.

Today, we usually describe AI as a product or an innovation. That language matters, because it offloads responsibility onto a variety of market-based actors: If there are harms, litigation will sort it out; if there are inefficiencies, the market will correct them. What we don’t do is center human responsibility for what AI creates and causes. And that is the default case.

The shift to thinking of AI as civic infrastructure is almost deceptively simple. Once we use that language, we inherit a whole set of responsibilities and obligations—the same ones we apply to the systems that sustain our society, our way of life, and our identity. We don’t ask whether a water system should be publicly accountable. We know that it is. We don’t ask whether electrical systems should meet safety standards. Of course they should. We don’t question whether roads should be designed with input from the communities they connect. That’s just how we build them.

This is why the idea is so powerful: By framing technological innovation within a sense of civic responsibility, we change the defaults—not just about who gets to participate in these decisions, but about which questions can be asked in the first place.

I want to acknowledge that this is a genuinely uncomfortable framing. For governments, it requires building the technical capacity to administer this infrastructure in a socially positive way. For companies, it means accepting that what they build is subject to a greater authority—standards equivalent to zoning laws or environmental regulation—which may come at a cost. That’s a real constraint. But when you’re serving the public, it’s also a necessary one. And for all of us, it demands something even more uncomfortable. We don’t just get to be observers of a technological curiosity unfolding around us. We have to own our role in shaping what this society looks like—and we have to actively claim our public agency, our individual right to be part of the conversation.

Civic infrastructure doesn’t have to be the only model. But it’s a useful framework for shifting us away from a narrative in which a few people build technology and sell it to everyone else—toward one in which all of us come together to envision and design the kind of technology-enabled future we want.

Let’s start with the concept that through almost all of modern political history, governance has been shaped by scarcity. Citizens have many needs, governments have very limited resources, and so governments are empowered to decide which needs they can serve and which ones they simply cannot. One of the more positive visions of how AI might change our world is that, thanks to its efficiency and productivity gains, we might do a much better job of serving every need of every person in our society.

That’s a very optimistic case, but I want us to hold that optimism as we look at some examples. I’ll give you a first example of an organization that we work with quite closely: DataGénero, in Argentina, whose AymurAI tool works directly with judges inside the judicial system, using AI to understand the patterns and stories behind crimes of sexual abuse and domestic violence, crimes that at their heart are fundamentally crimes of power.

These tools do two things: They help judges manage caseloads more effectively, speeding the administration of justice itself; and they provide a holistic view of how decisions are being made, making it possible to interrogate whether there’s foundational bias within the system. And what we find, perhaps most importantly, is that many of the assumptions we hold as given—that access to judicial systems is inherently limited, that cases take years to process—are no longer inevitable. We can actually do something much better.



But let’s take on the risk. When governments begin to use AI without public accountability and public architecture, we see the other side of it: a growing capacity for governments to surveil us as citizens and to violate our foundational rights to privacy. We’ve seen AI systems used for automated sentencing in court cases where individuals don’t even have the opportunity to ask a human judge: Why was I sentenced this way? Or scoring systems used to determine who in a community can get access to food assistance or fair housing—with no accountability framework in place.

These are areas where, without a clear accountability framework, governments may use AI in ways that don’t serve the public—and we need to be mindful of that. It goes back to what I mentioned in the first question: the idea of individual agency. We all need to take ownership of moving this forward.

I want to be very clear: I don’t believe ethical AI exists as such. Ethics is an exclusively human domain—it’s something we do when we evaluate moral choices. So let’s not start from the assumption that AI systems will ever be ethical. Instead, let’s try to build a society that’s designed ethically, where AI does what it’s actually capable of.

However, I have been one of the primary architects of responsible design practices in AI, going back decades. For 25 years I’ve been an AI scientist, and one of the things that worries me today is that too often ethics is treated as a way to evaluate a system after it has been built. People design a model, build an application layer on top of it, run it through some kind of audit or certification mechanism, check the box, and call it responsible. As an engineer, I’ll tell you: That’s never a successful way to design a system. You have to build responsibility into the design of the system itself.

What does that mean in practice? Before you write a line of code or do a single training run, you have to ask: If this model is created, who does it affect? What are the outcomes and impacts—not just on its technological environment, but on the human environment in which it will operate? Are we taking into account commentary from the stakeholders who will be the end recipients, beneficiaries, or targets of the system? Are we designing in ways that allow us to address unexpected bias, or the potential for technically correct but socially incorrect outcomes? Do we have pathways for accountability—for somebody to be able to question the system’s decisions?

What this means is shifting the burden of responsible design from those who evaluate finished systems to the engineers who build them in the first place. That, in turn, requires a very different social approach to how we train engineers. It means going back to universities, high schools, and other places where engineering skills are taught, and ensuring that students are exposed to questions of responsible design. It means that companies need mandates requiring them to build responsibility into new products. And it means that, politically and in terms of public accountability, we need new mechanisms to enforce standards and to hold accountable those who build unethical systems and release them into the world without proper oversight.

To begin with the strengths each world brings: The private sector has an extraordinary capacity for technological innovation. For the public sector and multilateral institutions, unfortunately, innovating at that speed and scale is unrealistic. But we need to match the pace of technological innovation with a new capacity for speed in human infrastructure: in our social institutions, our mechanisms of collaboration, and the way we govern these tools.

To do that, we need a new shared vocabulary. Multilateral institutions have a long tradition of deep, sometimes slow deliberation—bringing together many viewpoints to find synthesis and build a shared vision. Technology, by contrast, has largely lived in a space of extreme individualism: companies competing, individuals striving for scientific breakthroughs. Connecting these two worlds takes more than putting them in a room together. We need a way to align the power, the gravitas, and the importance of these decisions, and to make sure that everybody feels a shared sense of participation and ownership in the outcome.

We don’t necessarily see this at the macro level, but certainly at the micro one. For example, when working with communities, we ask how to ensure that a community stewarding a biocultural landscape at risk from climate change has both the technological capacity to build AI models—showing how rising sea levels might affect their barrier reef—and the social structure, such as a chief who can convene decision-makers, to ask: “If this tool gives us better insight, what would we do with it?” Different languages, shared intention, common outcomes.

However, I’ve seen it at the macro level too. I sat on the United Nations Secretary-General’s High-Level Advisory Body on AI: 38 experts from around the world, engineers and sociologists, historians and technology leaders, trying to build principles broad enough for 193 countries to agree on. And in that process, what I learned is that you have to engage with both worlds simultaneously. You have to be able to understand that even though we speak very different languages, people are motivated by the same things: They want to see a better future, they want to see one that’s prosperous, and they want to hopefully see one that’s quite equitable. Getting there sometimes requires creating a new language.

This is really a question about the epistemic limits of AI governance. For years, I’ve worked with faith-based communities around the world to understand how they think about AI. And there’s a thread that runs through nearly every tradition: stories about what happens when people reach for godlike power. Prometheus, who stole fire from the gods and was punished for eternity. Icarus, who flew too close to the sun. 

I raise this because as you move across the world’s traditions and encounter these stories, you see that humanity has spent thousands of years wrestling with a foundational question: How do we ensure that human hubris doesn’t exceed our capacity for social solutions? In some ways, it feels like over the last 25 years we’ve forgotten many of those stories—and reached for potentially godlike power with AI, while leaving behind the guidelines and moral values we thought would govern this moment.

We see this in the way we approach governing AI. Many of our approaches are grounded in political and moral traditions that are only a few hundred years old, developed primarily in the West, and that center a set of principles and values that are not necessarily inevitable. For example, we often talk about privacy, particularly in the Global North, in terms of individual privacy: a sense of personhood, and the right that flows from individual agency to be protected from others seeing what we do.



But it’s important to acknowledge that in many communities in the Global South, data is seen as a communal good—that how we are represented is something shared rather than individual. Even in this small example, you can see the tensions that arise if you try to govern health AI purely within a Global North framework, without engaging with how systems have been developed to protect community data in the Global South.

It’s very important to acknowledge that we are still in the early phases of the AI conversation. But what we do know is that simply taking what we did 20 years ago and applying it to a rapidly changing world won’t work, unless we are also able to innovate in ways that stretch those boundaries. And maybe this moment, brought about by AI, actually forces us into a more holistic conversation about who we are and how we are represented across humanity—not merely through dominant cultures, but in a way that integrates many different views of how our world should be structured into something that works for all of us.

I’d start with individuals. The foundation has to be basic literacy: not just the ability to use these tools, but an understanding of the moral and structural trade-offs embedded in how they’re designed and deployed. People don’t need to be engineers. But when they encounter an AI system in their daily lives, they should be equipped to ask: Is this right for me? 

At the system level, I would like to see governments stepping into AI governance not merely as a matter of regulation—which often means limiting the worst excesses of a few technology actors—but stepping forward with a positive, comprehensive vision of what they hope our societies can become. Governments have a unique set of tools that are rarely used to their full extent: from public financing and research and development all the way through to social policy that guides how AI tools might create public benefit for everyone. And I would like to see governments adopt a more active, more positive framing of how AI regulation and governance happen.

And then there is the level of society as a whole—not just institutions of power, but all of us together. And in some ways, this brings us back to where we started. Much of our world today has been shaped by scarcity: The idea that there’s not enough to go around has been used to justify profound inequality and the privileging of some over others. If AI delivers even a fraction of its promise—genuinely creating more for all of us—then we have to ask: If better is actually possible, why are we so content with what we have? Are we willing to accept a world of ever-greater concentration of power and wealth, even when there’s enough for everyone?

The toolkit, ultimately, has to be a set of questions—and the institutions and spaces where we can ask them together, honestly, and build toward a vision of tomorrow that works for everyone.