BLOG POST

OPINION
 
MAR 2026
by Diana Mosquera

The Geopolitics of Artificial Intelligence: Reflections from the AI Impact Summit in India

In February 2026, the global technology landscape turned its attention southward as India hosted the AI Impact Summit, a milestone not only for the event's scale but also because, for the first time, one of the largest artificial intelligence summits was held outside the traditional centers of power in the Global North. This gathering was not merely a technical conference; it brought together heads of state, tech giants, researchers, and civil society, and even opened its doors to the Indian public. The host government's message was clear: AI is a strategic priority, and India seeks to establish itself as a key player in its global development.
As a representative of Diversa, I had the opportunity to participate and observe firsthand the narratives that are shaping the present and future of this technology. However, given the sheer scale of everything that took place, an inevitable question arose: Why is AI still mobilizing more resources and geopolitical agendas today than any other technology in history?

The Race for Infrastructure

Beyond the rhetoric of cooperation, the reality is one of fierce competition. AI does not live in the cloud; it lives in physical infrastructure. Countries and big tech companies today are not only competing for algorithms, but for control over:
* Data centers and computing capacity
* Chips, semiconductors, and the critical minerals needed to manufacture them
* Specialized human talent and large volumes of data

This is where the Global South faces its greatest challenge. Historically, our regions have participated in the global economy as suppliers of raw materials. In the age of AI, the risk is that we will repeat this cycle: becoming suppliers of data, energy, and minerals, while consuming technology designed and controlled by others. This dynamic becomes even more evident in the growing competition for physical infrastructure, where investment commitments exceeding $250 billion were announced. For example:
* Sundar Pichai announced the America-India Connect initiative, with $15 billion for undersea fiber optics and data centers connecting the Southern Hemisphere.
* The Adani Group committed to investing $100 billion by 2035 in digital infrastructure powered by green energy.
* Tata and OpenAI signed a strategic partnership to scale 1 GW data centers and train Indian youth in the use of ChatGPT Enterprise.

The Brazil–India Agreement
In this same context, Brazilian President Lula da Silva and Indian Prime Minister Narendra Modi signed an agreement on critical minerals and rare earth elements. Brazil, which possesses some of the world's largest reserves of these resources, is positioning itself as a key supplier to India's semiconductor and AI hardware industries. The agreement aims to diversify supply chains away from China, but it also highlights how the geopolitics of AI are deeply rooted in the control of natural resources.

Taken together, these announcements reflect not only an accelerated expansion of AI infrastructure but also a deeper reorganization of global value chains. What is at stake is not merely technological development, but the material conditions that sustain it and who controls them. Without critical analysis, these dynamics risk consolidating new forms of extractivism, now mediated by data, energy, and digital infrastructure.

Common Goods
But this competition is not limited to physical infrastructure. It also extends to the data and knowledge that power these systems. While the first layer of competition involves physical assets (energy, chips, and data centers), the second is more diffuse yet equally strategic: who produces, controls, and benefits from digital resources. In this context, one of the most significant announcements was the launch of the Global AI Impact Commons, a platform aimed at sharing datasets in areas such as health and agriculture. The initiative is presented as an effort to make data more accessible, but it quickly raised key questions.
In the civil society panels, organizations like Creative Commons emphasized a fundamental point: openness is not the same as justice. The fact that data or code is accessible does not guarantee that it functions as a common good, nor that the communities that produce it receive benefits; in fact, one of the most intense debates revolved around this confusion. Open-source tools like PyTorch or scikit-learn form the foundation of the current ecosystem, but their “open” nature does not prevent them from being incorporated into proprietary models. The same is true of large repositories of collective knowledge, such as Wikipedia or free software: value is produced in a distributed manner, but captured in a concentrated way.

More than a problem of access, what is at stake is the lack of structures to support the commons. Without clear mechanisms, “openness” can easily become a new form of extraction.
Therefore, discussing the commons in AI means going beyond mere availability and moving toward participatory governance, where communities have decision-making power over the use of their data and knowledge, while mechanisms of reciprocity ensure that the value generated flows back to those who contribute—as proposed, for example, by the Equitable AI Transition Playbook developed in collaboration with the ILO.

The summit concluded with the adoption of the New Delhi Declaration on AI Impact, presented as a historic consensus backed by 92 countries and organizations, including key players such as the United States, China, the European Union, and the United Kingdom. Alongside this declaration, complementary initiatives were announced aimed at reducing disparities in access to AI:
* Charter for the Democratic Diffusion of AI: signed by 22 countries, it seeks to promote more equitable access to computing power and models, preventing their concentration among a few actors.
* Alliance for Advancing Inclusion Through AI: in collaboration with UNICEF, it proposes guidelines to ensure that AI development does not exacerbate gender inequalities or exclude people with disabilities.

However, despite its ambition, the declaration has clear limitations. It is not a binding instrument, but rather a political roadmap lacking concrete mechanisms for implementation or accountability. Furthermore, the process of drafting it revealed a troubling shortcoming: civil society participation was limited, which excludes key voices from decisions that directly affect our communities.
In this context, I also want to mention Ecuador's absence among the signatory countries. While other countries in the region, such as Costa Rica and Guatemala, chose to join these forums in search of cooperation and technology transfer, Ecuador did not have a visible presence. This not only reflects a disconnect from these multilateral processes but also raises questions about the direction the country is taking regarding AI governance: whether it prioritizes closed and opaque agreements, or whether there is a broader strategy to position itself in the global debates where the rules of the game are being defined today.

Challenging Narratives from the Global South

Beyond the institutional and corporate presence in the Summit’s main halls, my participation focused on the critical spaces where the status quo of digital power is challenged and alternatives are proposed for the Majority World.

Governance from the Global South: From Risk Mitigation to Structural Foundations

I participated in the roundtable discussion AI Governance from the South: Redlines to Baselines, a key side event organized by IT for Change in collaboration with partners from the Global Digital Justice Forum (Tech Global Institute, Data Privacy Brasil, Derechos Digitales, Research ICT Africa, among others). During this session, the discussion took a necessary turn: we moved beyond merely talking about “mitigating harm” or identifying harmful use cases to focus on structural issues. We discussed the need to establish baselines and draw non-negotiable red lines throughout the entire AI value chain.
This entails:
* Redefining the political economy: It is not enough to regulate AI once it has already been deployed; governance must intervene in innovation itself, public procurement, and competition laws to prevent monopolies.
* Centrality of labor and the environment: Labor regulation and environmental oversight cannot be footnotes. They are basic conditions for any AI system to be compatible with human rights.
You can read the final agreement here.

Reimagining the Public Value of Broadcasting in the AI Era

I had the honor of participating in this panel moderated by Alison Gillwald (Executive Director of Research ICT Africa), sharing the stage with leaders from South Africa, India, and other regions. At this forum, we addressed the critical intersection between data, local languages, and power. From a Latin American perspective, I emphasized that in the era of large language models (LLMs), our communities are being treated simply as “data mines.” My remarks focused on community governance: it is imperative that communities not merely serve as providers of input to train foreign models, but rather as actors with real decision-making power over their cultural repositories, historical archives, and native languages.
You can watch the session here.

Multistakeholder Approaches to Participation in AI Governance (MAP-AI)
I took part in this incredible side event, which aims to strengthen the effective participation of multiple stakeholders—including governments, civil society, the private sector, and academia—in artificial intelligence governance processes, with a special emphasis on amplifying the underrepresented voices of the Global South. Through forums such as the India AI Impact Summit, the initiative promotes more inclusive models of participation, where civil society is not only present but also has real influence on decision-making.
This effort connects directly with the proposal of ReGenAI: A New Deal for the AI Economy, an alternative framework that proposes:
* Economic Diversity: Reorienting AI toward models that strengthen local capacities rather than solely benefiting Big Tech.
* Public Value and Reciprocity: Ensuring that the economic value and knowledge generated by AI tangibly flow back to the communities that produced the original data.
* Ecological Sustainability: A genuine commitment to reducing the carbon footprint and the extractive impact of technological infrastructure.

Conversations like these demonstrate that the Global South is not merely participating in the AI debate; we are key players, redefining it and demanding a technological architecture that does not reproduce the colonial hierarchies of the past.
You can find more information here.

The AI Impact Summit made it clear that AI is the strategic infrastructure of the 21st century. However, it also highlighted the tensions between openness and the concentration of power. We must continue to join forces to envision the world we want to live in and the role we want AI to play in it; otherwise, artificial intelligence will continue to be a force that accelerates and deepens the inequalities we already know.

Ready for the AI Summit in Geneva in 2027 :)
Above all, I’m taking with me the incredible people I met in India. At times like this, the networks we build matter more than ever.
