Opinion · March 2026

By Diana Mosquera

The State of AI Governance: From Theoretical Ethics to Technical Reality

In recent years, the governance of artificial intelligence has made significant progress in formulating ethical principles, regulatory frameworks, and multi-stakeholder dialogue forums. These advances, however, coexist with increasingly evident gaps: there is a deep divide between what is promised in terms of transparency, accountability, and fairness, and what can actually be verified in practice [1][4]. This gap is not accidental. It stems from a structural limitation: we are attempting to govern systems that we do not yet fully understand.

Governance Beyond Regulation

AI governance cannot be reduced to a list of prohibitions or regulatory principles. Regulating without understanding is, at best, inefficient; at worst, a form of symbolic bureaucracy.
True governance begins with deep technical understanding: mapping the architecture of models, their training data, their biases, and their inference logic [2]. Without this foundation, regulation lacks any real capacity for intervention. Governance implies the ability to audit. It implies having the tools to open what is currently presented as a “black box.” In this sense, AI governance is not merely a political or legal issue. It is also an engineering challenge.
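To give a sense of what that mapping could look like in practice, here is a minimal sketch of a machine-readable audit manifest. The schema, field names, and values are illustrative assumptions, not an existing standard or any company's actual disclosure format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelAuditManifest:
    """Hypothetical minimum record a regulator would need to intervene."""
    model_name: str
    architecture: str                # high-level description of the model
    training_data_sources: list[str]
    bias_evaluations: list[str]      # which audits were run, not their scores
    inference_logging: bool          # can deployed behavior be reconstructed?

# All values below are placeholders for illustration only.
manifest = ModelAuditManifest(
    model_name="example-llm-7b",
    architecture="decoder-only transformer, 7B parameters",
    training_data_sources=["filtered web crawl", "licensed corpora"],
    bias_evaluations=["toxicity benchmark", "demographic-parity probe"],
    inference_logging=False,
)
print(json.dumps(asdict(manifest), indent=2))
```

The point is not the particular fields but the principle: regulation can only intervene on what is declared in a form that third parties can inspect and compare.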

The Challenge of Technical Governance

For governance to be effective, we need a technical infrastructure that remains largely unbuilt today. The emerging field of Technical AI Governance (TAIG) raises some of the most critical questions [6][3]:

- How can we verify that a model was not trained on sensitive or protected data if we have no access to the dataset?
- How can we ensure that a system can “forget” specific information without compromising its performance?
- How can we track where and how computing power is being used amid geopolitical tensions?

Techniques such as membership inference attacks, machine unlearning, and hardware geolocation aim to close these gaps; a minimal sketch of the first appears below. But we are still far from mature tools that would allow regulators, researchers, or civil society to audit these systems independently.
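To make the first of these questions concrete, the sketch below shows the core intuition behind a loss-based membership inference test: examples seen during training tend to receive systematically lower loss, so an auditor with query access can flag likely training-set members. The loss distributions and threshold here are synthetic assumptions standing in for a real model under audit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-example loss under the model being audited.
# Members (examples seen in training) are fit more tightly, so their loss
# is systematically lower. These distributions are illustrative assumptions.
member_losses = rng.normal(loc=0.4, scale=0.2, size=1000)
nonmember_losses = rng.normal(loc=1.0, scale=0.3, size=1000)

def infer_membership(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Flag examples whose loss falls below the threshold as likely
    training-set members (the classic loss-threshold attack)."""
    return losses < threshold

threshold = 0.7  # in a real attack, calibrated with shadow models
tpr = infer_membership(member_losses, threshold).mean()
fpr = infer_membership(nonmember_losses, threshold).mean()
print(f"flagged members correctly: {tpr:.0%}, false alarms: {fpr:.0%}")
```

Even this toy version shows the governance stakes: the test only works if auditors can query the model, which is precisely the kind of access current deployments rarely grant.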

AI as a Socio-Technical Ecology

Added to this technical complexity is another dimension that is often overlooked: AI is not just software. It is a global ecology that depends on deeply unequal flows of resources, labor, and knowledge [2]. Much of current governance focuses on “downstream” impacts: the safety of end users, harmful content, and immediate risks. Yet the least visible, and often most structural, impacts occur “upstream”: mineral extraction, intensive energy and water consumption, and precarious content-moderation work concentrated in the Global South. Ignoring these layers confines governance to the surface of the problem.
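A simple back-of-envelope calculation shows why upstream disclosure matters: a handful of reportable numbers is enough to make a training run's energy and carbon footprint independently checkable. Every figure below is an illustrative assumption, not a measurement of any real system.

```python
# All figures are illustrative assumptions, not measurements.
gpus = 1_000               # accelerators in the training cluster
gpu_power_kw = 0.7         # average draw per accelerator, in kW
hours = 30 * 24            # a 30-day training run
pue = 1.2                  # data-center power usage effectiveness
grid_kg_co2_per_kwh = 0.4  # grid carbon intensity

energy_kwh = gpus * gpu_power_kw * hours * pue
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1_000
print(f"energy: {energy_kwh:,.0f} kWh, emissions: {co2_tonnes:,.0f} t CO2")
```

If numbers like these were routinely disclosed, claims about environmental impact would stop being a matter of corporate narrative and become a matter of arithmetic.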
Effective governance must therefore adopt a systemic approach: recognizing AI as a global socio-technical infrastructure and moving toward pluriversal governance models, that is, models capable of adapting to different ecological, cultural, and political contexts.

The Transparency Gap

Despite efforts to promote transparency, there is a growing crisis of trust. Corporate reports on AI models are increasingly limited in key areas such as environmental impact, bias, and working conditions [1][5]. In many cases, companies prioritize assessing reputational risks—such as content toxicity—while neglecting less visible but equally critical areas, such as privacy or the human working conditions behind these systems [4].
Toward Governance with Infrastructure

If one thing is clear, it is that AI governance cannot continue to rely exclusively on corporate self-regulation or legal frameworks that are disconnected from technical reality. Structural changes are needed:
- Public infrastructure that allows independent auditing of the use of data, energy, and resources (see the sketch after this list).
- Legal safeguards (safe harbors) that incentivize transparency and the reporting of failures.
- International cooperation to prevent competitive dynamics that ignore planetary boundaries.
Without these elements, governance will remain trapped between legal theory and technical opacity.
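As a gesture toward the first of these elements, the sketch below shows one way public audit infrastructure could make corporate disclosures tamper-evident: a hash-chained transparency log, in which each report cryptographically commits to everything published before it. The report fields and figures are invented for illustration.

```python
import hashlib
import json

def chained_digest(report: dict, prev_digest: str) -> str:
    """Commit each disclosure to everything published before it, so past
    reports cannot be silently rewritten after publication."""
    payload = json.dumps(report, sort_keys=True) + prev_digest
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical quarterly disclosures; fields and figures are invented.
reports = [
    {"quarter": "2026-Q1", "energy_kwh": 12_000_000, "incidents_disclosed": 3},
    {"quarter": "2026-Q2", "energy_kwh": 15_000_000, "incidents_disclosed": 1},
]

log, prev = [], "0" * 64  # genesis digest
for report in reports:
    prev = chained_digest(report, prev)
    log.append({"report": report, "digest": prev})

for entry in log:
    print(entry["digest"][:16], entry["report"]["quarter"])
```

The mechanism is deliberately simple: once a digest is published, amending an earlier report breaks the chain, and anyone can check it without trusting the publisher.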
The Importance of Local Infrastructure

Finally, any discussion of governance must address the issue of infrastructure. Technological sovereignty is not achieved through regulation alone, but through material capacity: local infrastructure, encryption systems, and execution environments that guarantee privacy and control over data [2]. In this context, Digital Public Infrastructure (DPI) emerges as a key component. It is not just about servers, but about ecosystems that enable the secure flow of information between citizens, governments, and organizations. This includes digital identity, interoperability, and systems that do not rely exclusively on global private actors. More than a technical challenge, it is a political commitment: to build AI that functions as a public good, and not solely as an engine of private accumulation.
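As a minimal illustration of the identity layer of DPI, the sketch below uses an Ed25519 signature (via the widely used Python `cryptography` package) so that a credential issued by a public authority can be verified by any service offline, without routing trust through a private platform. The credential fields and issuer are hypothetical.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g., a public registry) holds the private key.
# All field names and values below are hypothetical.
issuer_key = Ed25519PrivateKey.generate()
credential = json.dumps({"subject": "citizen-123", "claim": "over-18"}).encode()
signature = issuer_key.sign(credential)

# Any relying service verifies with the issuer's published public key,
# with no call out to a private intermediary.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, credential)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")
```

The design choice matters politically as much as technically: verification that works offline against a public key is verification that does not depend on the goodwill or uptime of a corporate gatekeeper.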

Closing the Gap

Today, AI governance is at a turning point. We have well-developed principles and regulatory frameworks, and we are beginning to build the necessary technical tools, but current incentives still favor opacity. Closing this gap requires more than new regulations. It requires rethinking governance as infrastructure: something that not only defines rules but also allows them to be verified, audited, and collectively transformed. Otherwise, we will continue to depend on what big corporations decide to show us.

[1] Bommasani, R., Klyman, K., Zhang, D., & Liang, P. (2025). The Foundation Model Transparency Index: 2025 Report. Stanford Institute for Human-Centered AI (HAI). https://crfm.stanford.edu/fmti/
[2] Domínguez, V., et al. (2024). Towards a Sociotechnical Framework for AI Governance: Bridging the Gap Between Technical Metrics and Social Impact. Journal of Artificial Intelligence Research.
[3] Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2025). Open Problems in Technical AI Governance (TAIG): A Survey on Machine Unlearning and Membership Inference. arXiv preprint arXiv:2501.
[4] Luccioni, A. S., et al. (2025). Measuring the Openness of AI Foundation Models: A Critical Analysis of Corporate Reporting vs. Independent Audits. Proceedings of the 2025 Conference on Fairness, Accountability, and Transparency (FAccT).
[5] OpenAI. (2024). Internal Red Teaming and Social Impact Report: Safety Evaluations for Frontier Models. OpenAI Blog/Safety Reports.
[6] Stanford University. (2024). Open Problems in Technical AI Governance. Technical AI Governance (TAIG). https://taig.stanford.edu/reports/open-problems-2024.pdf