The AnewZ Opinion section provides a platform for independent voices to share expert perspectives on global and regional issues. The views expressed are solely those of the authors and do not represent the official position of AnewZ.
The story of artificial intelligence in the Global South is often told through its most stable and visible players. Analysts highlight India’s IndiaAI Mission, Brazil’s expanding tech sector, or Saudi Arabia’s drive to weave AI into schools and megacities.
These examples matter, but they capture only part of the picture. They show how AI works where governments are relatively stable and resources are available. They say far less about the millions who live in societies marked not by steady growth but by conflict, displacement, and insecurity.
These fragile contexts rarely make headlines, yet they fill a vast body of reports by international agencies and think tanks, which catalogue humanitarian logistics, early warning systems, and ethical risks in painstaking detail. The result is a strange gap: abundant technical analysis, but little sense of what AI means in daily life. What does it mean for a girl in Afghanistan studying quietly at home, or for a family in Sudan waiting to see if an aid convoy will arrive?
This article follows those lived realities across education, development, and security — showing how AI adapts to instability and, in doing so, reshapes the future.
Education: Hidden Classrooms and Uneven Continuity
Classrooms are often the first institutions to collapse when violence spreads, yet families fight hardest to preserve them. Across fragile parts of the Global South, AI does not appear as sweeping transformation. Instead, it slips into the cracks — sometimes a fragile lifeline, sometimes a risky experiment, always shaped by insecurity.
In Sudan and Burkina Faso, AI is often referenced in donor strategies, whether as satellite mapping of schools or as entries in policy reports, but in daily life students rely on radio lessons or itinerant teachers when classrooms are shuttered.
In Afghanistan, the Taliban’s restrictions on girls’ schooling have spurred quiet adaptation through SOLAx, a chatbot from the School of Leadership, Afghanistan (SOLA) that delivers lessons in Dari and Pashto to tens of thousands of students over WhatsApp. For families, these apps are discreet tools of continuity, allowing children to keep learning in the privacy of their homes.
Similar improvisation unfolds in Bangladesh’s Rohingya camps, where the closure of thousands of learning centers left half a million children without classrooms. Aid workers responded with the AprendAI chatbot, delivering lessons over simple phones. The tool does not replace teachers, but it offers families one more thread to stitch into fragile routines.
In Honduras and Mexico, AI sits uneasily alongside gang-related school closures. Some schools experiment with edtech platforms, while in violent districts students share homework through WhatsApp. By contrast, in Jordan, UNICEF’s Learning Passport provides Syrian and Iraqi refugee children with AI-tailored lessons. Parents describe the platform as a reassurance — not because it guarantees outcomes, but because it signals that education persists even in displacement.
Taken together, these examples show that AI in education is shaped less by access than by how insecurity bends its purpose. Sometimes it is a chatbot in a camp, sometimes a WhatsApp lesson taken in secret, sometimes a mapping algorithm that traces schools in conflict zones. It becomes part of the strategies families use to stitch continuity from uncertainty — and, in doing so, to sketch out what their future might look like. For many, the simple act of holding education together is itself an act of future-making.
Development: Algorithms in the Food Line
If education shows AI holding classrooms together, development shows it adapting to fragile economies, aid systems, and daily survival. When institutions falter, people weave AI into coping strategies already in motion.
In Sudan, humanitarian groups use AI to forecast supply disruptions and diagnose X-rays where doctors are scarce. Yet the same instability that makes these tools necessary also undermines them: power cuts stall systems, and armed groups interrupt deliveries. Families fall back on kinship networks, with AI layered on top as a fragile but essential supplement.
In Afghanistan, satellite crop monitoring provides farmers with irrigation advice. Even when direct access is limited, forecasts ripple into markets, shaping crop prices and aid distribution. AI’s presence is indirect but real, filtering into livelihoods shaped by conflict.
The Middle East shows similar adaptation. In Yemen, AI models predict water shortages and cholera outbreaks, giving families and aid workers precious time to prepare. These forecasts cannot prevent crisis, but they provide continuity in the face of breakdown. Across the Gulf, Saudi Arabia integrates AI into megacity projects. The contrast is not absence versus presence, but how instability determines what AI is asked to do: in Yemen, a stopgap for daily survival; in Saudi Arabia, a symbol of long-term planning.
In Bangladesh, AI supports the government’s “Smart Bangladesh” vision, with apps diagnosing crop diseases and health workers detecting tuberculosis through X-rays. But as in classrooms, these systems are tested by floods and cyclones. Each disaster redefines their role, turning tools meant for growth into emergency lifelines.
In Latin America, AI optimizes agribusiness exports in Brazil, while in Honduras machine learning forecasts floods in vulnerable neighborhoods. The contrast reflects not a binary of modernity and neglect, but the many ways the same technology is bent toward expansion or survival depending on context.
Development, then, is not a story apart from education. Both show how insecurity reshapes technology into daily roles. Whether as a chatbot in a camp, an aid algorithm, or a crop app, AI settles into ordinary practices — forecasting floods, guiding aid, diagnosing crops — less as a sweeping engine of growth than as one more thread stitched into survival strategies. Yet even as it does so, it reshapes how people imagine tomorrow: a farmer deciding when to plant, a family preparing for floods, a doctor diagnosing with limited tools. These small choices bend the trajectory of societies toward futures defined as much by endurance as by ambition.
Security: Safety and Suspicion
If education shows AI holding classrooms together, and development shows it woven into food lines, security reveals its sharpest edge. Here, AI rarely appears as progress alone. It recasts insecurity, sometimes offering protection, sometimes exposure, and often both.
In the Sahel, governments deploy AI systems that scan satellite imagery and social media to anticipate extremist attacks. Villagers do not see the algorithms themselves — only the warnings that a market day may not be safe. Sometimes these alerts save lives; other times, when violence erupts despite them, people fall back on rumor networks. AI here becomes one more thread in survival strategies.
In Afghanistan, this ambivalence is sharper. Families turn to chatbots like SOLAx to preserve children’s learning, while the state expands its network of AI-powered cameras across Kabul. For some, these cameras offer a sense of safety in a war-scarred society. For others, they raise fears of surveillance and control.
The Middle East shows similar contradictions. In Iraq, AI-driven facial recognition has helped curb some crime, welcomed by residents in certain districts but distrusted by others who fear misuse. In Yemen, AI guides drones delivering humanitarian supplies — a rare lifeline in a fractured state — while across the Gulf, the same technologies serve population-wide monitoring. The difference is not lifeline versus repression, but how insecurity and politics shape interpretation.
Latin America echoes this ambivalence at the neighborhood level. In Mexico and El Salvador, predictive policing is promoted against gangs and cartels. Some districts welcome cameras as protection. Others, especially poorer ones, feel profiled instead. A teenager walking past a camera in San Salvador cannot know if it marks him as someone to be safeguarded or suspected.
Security, then, is not an isolated story. It extends the arc traced in classrooms and food lines: AI absorbed into fragile systems, carrying both promise and risk, never detached from instability. For families, students, and communities, AI does not lift them out of insecurity but settles within it — shaping how they navigate fragile forms of safety and uncertainty. And in that fragile navigation, the outlines of the future are drawn.
Conclusion
Artificial intelligence in the Global South is often described as a divide between those who have it and those who do not. But fragile and conflict-ridden contexts show something different. Here, AI is not a marker of access or exclusion. It is a technology bent by insecurity, absorbed into daily life in ways that carry both promise and risk.
In classrooms, it surfaces as chatbots in refugee camps or discreet WhatsApp lessons in Afghanistan — fragile forms of continuity that families hold onto when schools collapse. In development, it appears in crop diagnostics, flood forecasts, and aid supply chains — tools never enough to erase instability, yet indispensable precisely because no alternative exists. In security, it takes on its sharpest edge: cameras and algorithms some experience as protection, others as profiling, and most as both at once.
Across these domains, the pattern is consistent. AI does not sweep away insecurity; it takes its shape from it. Families, farmers, teachers, and communities fold it into survival strategies, weaving it alongside rumor networks, radios, and kinship ties. For them, AI is not a triumph of progress or the absence of it. It is a fragile companion to endurance — and in that role, it is reshaping the future.
In fragile societies, the future is not built in think-tank blueprints or glossy national strategies. It is assembled daily in refugee camps, conflict zones, and unstable markets, where AI is woven into the struggle to maintain continuity and dignity. The story of AI in the Global South is not only about catching up with the Global North. It is about how insecurity itself is shaping what the future looks like — and how technology, fragile yet indispensable, is becoming part of that lived horizon.
Dr. Rachael M. Rudolph is an Assistant Professor of Social Science at Bryant University-Beijing Institute of Technology, Zhuhai College, an Adjunct Professor of Counterterrorism and Cultural Intelligence at Nichols College, and a consultant at RMR Consulting Services, LLC.