Translated's Research Center

A Tech-Optimist Global South

Payal Arora, Professor of Inclusive AI Cultures at Utrecht University, compares the West’s pessimistic paralysis around AI with the tech optimism of the Global South.


Trends

Backstage at the 2019 Copenhagen Tech Festival, I waited to deliver my talk as Chris Messina—the inventor of the hashtag—stood before the crowd, confessing his regret. He was on a penance tour. Dismayed by the hashtag’s viral toxicity, he vowed to make amends. His motto: “You break it, you fix it.” The Valley is in the grip of a crisis of faith—a culture once synonymous with progress now mired in self-reproach. Many AI pioneers today have become evangelists of remorse,1 seeking redemption for technologies they helped unleash. Doom-prepping is the new trend: billionaires are building the luxury bunkers that LinkedIn co-founder Reid Hoffman calls “apocalypse insurance.”2

We in the West are living through what I call “pessimism paralysis”3—a collective despair toward the digital so deep it borders on impotence. Meanwhile, the rest of the world leans in. In my two decades of fieldwork across the Global South—from favelas in Brazil to classrooms in India—I’ve witnessed excitement, not fear. People are experimenting with AI not because they are naive but because they are rational optimists: they see a new tool to confront old problems they face daily: corruption, poverty, joblessness, injustice. Pessimism is a privilege for those who can afford to despair. Tech optimism is not blind faith in machines; it is faith in ourselves—in our ability to adapt, create, and act with care, bending AI to our will. Perhaps it is time the West learned from the rest of the world how to approach technology with grounded hope.


People across the Global South are experimenting with AI not because they are naive but because they are rational optimists: they see a new tool to confront old problems they face daily: corruption, poverty, joblessness, injustice.


Paternalism Pathology

When I ask my media studies students about AI, the responses are often sharp: harmful, extractive, deceptive. Some want to opt out entirely. With Big Tech, they don’t want a seat at the table—they want to blow up the table. Universities have built this ecosystem of fear. We teach resilience instead of reinvention, compliance instead of curiosity. Students now sign “Scientific Integrity Awareness Statements” pledging how they’ll use generative AI—turning educators into enforcers and learners into risk managers.

We preach resistance to AI for the Global South while drafting our manifestos on MacBooks and scheduling our revolution on Google Calendar.4 At a global conference, I asked a room of self-declared Marxists to raise a hand if they didn’t use Microsoft, Apple, Google, or Meta—not a single hand went up. We enjoy the luxury to romanticize alternatives we ourselves don’t live by, while we induce guilt and shame in those we claim to protect for using these tools for their self-actualization and mobility. Even in global tech policy circles, anxiety dominates. At a recent think tank meeting where I sit on the board, an aid-agency veteran asked whether “connecting the unconnected” fuels the data extraction machine. The paternalism was familiar: those of us in the West are assumed more resilient than people less connected in Burundi or the Central African Republic, who must be “protected.” But that logic forecloses possibility.



Experience in AI as Grounded Hope

AI is scripted through a binary—salvation or apocalypse, genius or villain. But AI is not one thing; it is plural, embodied, and local. It quietly transforms lives. Experience in AI forces this turn by situating intelligence in specific environments, where culture and context become part of the system rather than afterthoughts. When AI is shaped by place, it opens the door for diverse imaginations to determine its purpose, form, and meaning. This is already taking shape across the Global South—not as speculative technologies but as intelligence embedded in clinics, streets, and safety infrastructures.

In Nairobi’s Kibera, community health workers now use AI-guided diagnostic tools that pair smartphones with low-cost test readers, turning the diagnostic kit itself into a decision-support system rather than a passive strip.5 In the Philippines, humanitarian responders and the Red Cross work with FloodTags to fuse hydrological sensors and real-time social data into hyper-local flood alerts that reach people on the move, proving that early-warning “intelligence” must live in the street, not just the cloud.6 In Mexico, gender-violence initiatives such as Mujer Segura7 and Centro-i’s8 AI initiatives link chat-based reporting, risk scoring, and local service routing into the phones women already carry—embedding safety into infrastructure rather than outsourcing it to platforms. In India, an AI tool predicted monsoon rains up to 30 days in advance,9 helping 38 million farmers decide what and when to plant. And in Brazil, one of the world’s most litigious nations, judges are turning to AI to clear backlogged cases,10 even as lawyers deploy it to file new ones.

What sets these examples apart is not technical novelty but design orientation: AI is treated as a collaborator, not a savior. Optimism here functions as a design ethic—one that assumes communities are capable of co-shaping machines, that embodiment is not a hardware choice but a cultural contract. When intelligence is situated—in soil, sea, street, and settlement—the meaning of “AI for good” stops being abstract.
Reclaiming tech optimism doesn’t deny the harms—it widens the horizon of possibility. An Argentine filmmaker at an AI Film Festival told me he once felt guilty using AI, haunted by its environmental cost and the weight of public shame. Then, one day, he stopped apologizing. “I felt light,” he said. “Like a burden was lifted.” Through AI, he gets to tell the story of his people—and sees in it a tool that democratizes the creative industry long reserved for the elite.
The problem is not a lack of vision—it is that we keep searching in the same places. For much of the world, technology is a negotiation with constraint. People improvise with bandwidth, share devices, pool data, and build networks of care. These aren’t stories of scarcity but of ingenuity. If Silicon Valley’s optimism is about scale, theirs is about survival.

The American Dream Is Being Lived Elsewhere

The 2023 Stanford University AI Index11 found optimism about AI’s benefits high in countries like China (83%), Indonesia (80%), and Thailand (77%), but significantly lower in Canada (40%), the U.S. (39%), and the Netherlands (36%). In Nigeria, young people tie optimism to social change. Olasupo Abideen of Restless Development12 explains the Nigerian Dream: “we are always optimistic, focused, and energetic because we are trying to change the narrative about our country.” By contrast, people in the United States are becoming increasingly skeptical of the American Dream.13 Only 38% of Democrats under 50 still believe in social mobility—mirroring their view of AI as yet another system that reinforces, rather than reduces, inequality.

Even refugees—one of the world’s fastest-growing14 and most routinely dehumanized populations—are part of this redistribution of optimism. In my yearlong project with the UN Refugee Agency in Brazil,15 we found that refugees desperately wanted to opt in, not out, of digital opportunities. We found not fatigue or retreat but a fierce desire to be visible. They don’t want to be spoken for.

Lucía, for instance, a thirty-year-old woman, traveled to Brazil because of health issues and the lack of medical access in Venezuela. She suffered from endometriosis and needed surgery. She wants to tell her story on Instagram and TikTok because she believes it will give others like herself strength and spiritual support when they realize they are not alone: “My dream is to make videos because my story is hard to tell. I went through five surgeries and God lifted me up. Many people are in the same situation. I had two heart attacks and God raised me up and many things happened… There are things I want the world to understand.” AI-assisted tools help refugees like Lucía,16 who are video-production amateurs, to script, edit, and circulate their own narratives, not as humanitarian footnotes but as protagonists with agency.

If the American Dream once promised self-definition, many now find that possibility elsewhere—at border camps, in urban peripheries, and in WhatsApp and TikTok feeds where dignity is reconstructed in pixels and subtitles rather than policy. Rational optimism is not naive; it is a grounded belief that human creativity, given the right conditions, can produce collective good. It is not blind faith in technology but a commitment to learn from those who make it work under imperfect conditions. Optimism becomes an ethics of attention: to possibility, to plurality, and to the many worlds that make technology whole again. The future of AI will be defined not by how fast it scales, but by how well it listens—to cultures, to contexts, to communities.

Payal Arora

Payal Arora is Professor of Inclusive AI Cultures at Utrecht University and co-founder of the Inclusive AI Lab. She is a leading digital anthropologist with expertise in researching user experience in the Global South to help shape inclusive AI-enabled designs and policies. Arora is the author of more than 100 journal articles and award-winning books, including The Next Billion Users with Harvard University Press. Her new book with MIT Press, From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech, was longlisted for the 2024 Porchlight Business Book Awards and won a Silver Medal at the 2025 Axiom Business Book Awards. Forbes named her the ‘next billion champion’ and the ‘right kind of person to reform tech.’ She was listed among the 100 Brilliant Women in AI Ethics 2025 and won the 2025 Women in AI Benelux Award for her work on Diversifying AI. More than 250 international media outlets have covered her work, including the Financial Times, Fast Company, Wired, Al Jazeera, The Economist, and TechCrunch.

References

  1. Nature Editorial Board. “Stop talking about tomorrow’s AI doomsday when AI poses risks today.” Nature, June 27, 2023.
  2. BBC News. “Tech billionaires seem to be doom prepping. Should we all be worried?” BBC News, October 10, 2025.
  3. Arora, Payal. From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech. Foreword by Charles Hayes. Cambridge, MA: The MIT Press, 2024.
  4. McQuillan, Dan. Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol: Bristol University Press, 2022.
  5. Digital Impact Alliance (DIAL), “THINKMD Clinical Decision Support Tool_Kenya,” Digital Impact Exchange.
  6. FloodTags, “Real-time flood monitor for the Philippine Red Cross,” FloodTags (Success stories).
  7. Mujer Segura
  8. Centro-i, “Centro-i para la sociedad del futuro: Inicio,” Centro-i.
  9. The Economist, “AI models ace their predictions of India’s monsoon rains,” Science & Technology.
  10. Business & Human Rights Resource Centre, “Brazil: Courts and lawyers embrace AI, fueling both efficiency and more lawsuits,” Latest News, September 25, 2025.