Added to this technical complexity is another dimension that is often overlooked: AI is not just software. It is a global ecology that depends on deeply unequal flows of resources, labor, and knowledge [2]. Much of current governance focuses on “downstream” impacts: the safety of end users, harmful content, and immediate risks. However, the least visible, and often more structural, impacts occur “upstream”: mineral extraction, intensive energy and water consumption, and precarious content moderation work, concentrated in the Global South. Ignoring these layers confines governance to the surface of the problem.
Effective governance must, therefore, adopt a systemic approach: recognizing AI as a global socio-technical infrastructure and moving toward governance models that are also pluriversal—that is, capable of adapting to different ecological, cultural, and political contexts.
**The Transparency Gap**
Despite efforts to promote transparency, there is a growing crisis of trust. Corporate reports on AI models are increasingly limited in key areas such as environmental impact, bias, and working conditions [1][5]. In many cases, companies prioritize assessing reputational risks—such as content toxicity—while neglecting less visible but equally critical areas, such as privacy or the human working conditions behind these systems [4].
**Toward Governance with Infrastructure**
If one thing is clear, it is that AI governance cannot continue to rely exclusively on corporate self-regulation or legal frameworks that are disconnected from technical reality. Structural changes are needed:
- Public infrastructure that allows for independent auditing of the use of data, energy, and resources.
- Legal safeguards (safe harbors) that incentivize transparency and the reporting of failures.
- International cooperation to prevent competitive dynamics that ignore planetary boundaries.
Without these elements, governance will remain trapped between legal theory and technical opacity.
**The Importance of Local Infrastructure**
Finally, any discussion of governance must address the issue of infrastructure. Technological sovereignty is not achieved through regulation alone, but through material capacity: local infrastructure, encryption systems, and execution environments that guarantee privacy and control over data [2]. In this context, Digital Public Infrastructure (DPI) emerges as a key component. It is not just about servers, but about ecosystems that enable the secure flow of information between citizens, governments, and organizations. This includes digital identity, interoperability, and systems that do not rely exclusively on global private actors. More than a technical challenge, it is a political commitment: to build AI that functions as a public good, and not solely as an engine of private accumulation.
**Closing the Gap**
Today, AI governance is at a turning point. We have well-developed regulatory frameworks and principles, and we are beginning to build the necessary technical tools. But current incentives continue to favor opacity. Closing this gap requires more than new regulations. It requires rethinking governance as infrastructure: something that not only defines rules but also allows them to be verified, audited, and collectively transformed. Otherwise, we will continue to depend on what big corporations decide to show us.
[1] Bommasani, R., Klyman, K., Zhang, D., & Liang, P. (2025). The Foundation Model Transparency Index: 2025 Report. Stanford Institute for Human-Centered AI (HAI). https://crfm.stanford.edu/fmti/
[2] Domínguez, V., et al. (2024). Towards a Sociotechnical Framework for AI Governance: Bridging the Gap Between Technical Metrics and Social Impact. Journal of Artificial Intelligence Research.
[3] Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2025). Open Problems in Technical AI Governance (TAIG): A Survey on Machine Unlearning and Membership Inference. arXiv preprint arXiv:2501.
[4] Luccioni, A. S., et al. (2025). Measuring the Openness of AI Foundation Models: A Critical Analysis of Corporate Reporting vs. Independent Audits. Proceedings of the 2025 Conference on Fairness, Accountability, and Transparency (FAccT).
[5] OpenAI. (2024). Internal Red Teaming and Social Impact Report: Safety Evaluations for Frontier Models. OpenAI Blog/Safety Reports.
[6] Stanford University. (2024). Open Problems in Technical AI Governance. Technical AI Governance (TAIG). https://taig.stanford.edu/reports/open-problems-2024.pdf