Translated's Research Center

Who Gets to Stop the Machine? 

An interview with Arthur Sidney on procurement, institutional leverage, and what emerging AI governance models in Africa reveal about power in the age of AI.

Futures in Context

Arthur Sidney

Public Policy Strategist & Attorney

Arthur Sidney is a public policy strategist, attorney, and former congressional chief of staff focused on AI governance, procurement risk, privacy, and institutional accountability. He advises on technology policy, regulatory strategy, and state and federal affairs. His work examines how governments can retain authority and protect human decision-making as AI systems become embedded in public institutions and critical infrastructure.

These two questions meet at one point: decision-making authority. When we talk about AI governing society, we’re talking about AI shaping hiring decisions, education, access to public benefits—the ways it’s already affecting daily life. When we talk about governing AI, we’re talking about the frameworks meant to regulate it. But the place where both converge is this: Who has the power to stop the system once it’s deployed? Who can override it, who can veto it? 

It’s dangerous to assume that having a framework means you’re covered. A framework on paper is not enough. The real crux is who has the authority to intervene—and ensuring that humans remain in the loop, with someone clearly responsible and able to act when something goes wrong. Many of these questions, it turns out, can be addressed through procurement.

Procurement is the contractual process through which governments purchase and authorize AI systems within their borders. And so those contract terms are what ultimately govern AI—how it will be deployed, how it’s going to live and breathe in the country: who has authority, who has the power. It all flows through procurement. Without clear procurement controls, agencies may ultimately be responsible for automated decisions they cannot fully explain, audit, or override.

When you ask whether there are cases of institutions actually exercising the power to stop or override an AI system they’ve purchased, there are very few. The clearest recent example is the Dutch SyRI case, involving a government welfare and tax fraud risk-scoring system. In 2020, the District Court of The Hague found the system unlawful because its opaque data-matching process failed to adequately protect privacy rights, particularly for people in lower-income communities. For that to work—for an institution to actually stop or override an AI system—it needs both the legal authority and the practical power to act. Who holds that power is the question to watch globally. And right now, it remains deeply problematic.

The first requirement is clear: Humans must remain in the loop, with defined decision-making authority. That’s non-negotiable. But there’s a second requirement that’s just as important and often overlooked: AI systems need to reflect the local culture, language, ideas, and context of the communities where they’re deployed.

In fact, countries in the Global South—African countries in particular—need to be careful about adopting systems built in the West. Those systems carry a different social, cultural, and political context, embedded in their design through technical choices and procurement terms alike. A country shouldn’t be purchasing AI without understanding the consequences: who controls the system, who has authority over it, how much input the purchasing country actually has. Rwanda is a useful example here—a country actively using AI to revamp public institutions in ways that are genuinely visionary precisely because it has asked those questions from the start. That includes efforts to integrate AI into healthcare, agriculture, education, and public administration while simultaneously building domestic technical expertise and local deployment capacity. In healthcare, it includes work on AI-assisted clinical support and health data systems—which matters because the governance question is not only whether AI is adopted, but whether domestic institutions can understand, supervise, and adapt it.

It’s much more than a technical problem. It’s about meaning, trust, and governance. When AI systems aren’t trained in local languages, they fail to capture local context, culture, and the categories through which people understand their own lives. Communities end up governed through systems that reflect external assumptions—systems that don’t belong to their own social, political, or cultural world. 

What’s lost isn’t just accuracy. It’s the ability for people to be evaluated and understood on their own terms. And that becomes a governance failure because it undermines fairness, accountability, and the ability to explain decisions to the people affected by them. We’ve talked about the importance of being able to stop or override a system—but it’s equally important that decisions can be understood by the people subject to them. Language underpins everything. It shapes how systems interpret people, communities, and social reality.

Many African countries are moving fast, developing their own governance terms through procurement, vendor agreements, and contract clauses—and making sure they have a say in how AI develops at scale. Rwanda, Kenya, and Morocco are all doing serious work. South Africa is also instructive, though in a more complicated way. Its draft national AI policy showed real ambition around institutional design and inclusive AI governance, but the government later withdrew the draft after reportedly finding fictitious, likely AI-generated references. That episode actually reinforces the broader point: AI governance is not just about writing frameworks. It is about human oversight, verification, institutional capacity, and enforceable accountability. 

In the United States, there is no federal framework. States are moving at increasing speed, but there’s no prevailing national standard. What’s filling the vacuum is executive action, shifting presidential priorities, and—again—procurement. In the absence of legislation, contract terms are becoming one of the primary mechanisms through which the federal government regulates AI. 

To put it plainly: Parts of Africa are ahead of the U.S. in this respect. And I cannot fully explain why the United States has no federal AI legislation—we see the same gap in privacy law. And looking at the current landscape, I don’t expect a federal AI bill to be passed and signed by the president anytime in the near future. 

Democratic institutions have the capacity—but it depends entirely on whether those institutions are strong. The case in the Netherlands is instructive: A court with genuine authority determined that the welfare fraud AI system should not be applied, and there was democratic buy-in. The public accepted it, the government accepted it, and it moved forward. That’s what functioning accountability looks like in practice. 

If you have a strong judiciary and strong institutions, then yes—a democratic society can enforce guardrails when AI goes wrong. AI systems do fail. They hallucinate, produce errors, and generate outcomes institutions may not fully anticipate. The question is whether institutions exist that are strong enough to catch it when it does. Rwanda, again, is worth noting: a country building AI into healthcare and public institutions while simultaneously developing domestic expertise and grappling seriously with questions of privacy. That stands in contrast to the United States, which has enormous technical power and market share—and has not resolved those same problems in a unified way at home. 

The gap is between frameworks and leverage. A wonderful framework on paper, without the teeth to enforce it, is theater—a statement in name only. What needs to shift is control: over procurement, over data localization, over infrastructure, and over local capacity building. 

Procurement is the central mechanism because it determines who builds the system, on what terms, and with what level of local authority. But leverage also comes from market power—and Africa’s is growing. Through coordination via regional bodies like the African Union, African countries can now negotiate as a bloc with the West—the U.S., the EU—rather than each country going in alone. The burgeoning economies, the critical minerals, the explosion of the youth population—all of this is changing the equation.

The structural shift that’s needed is from adopting AI to setting the terms of adoption. Governance only becomes real when institutions not only write the rules, but retain the authority and capacity to enforce them.