At India’s AI Impact Summit, OpenUK Makes the Case for Open Source Governance

At India’s AI Impact Summit 2026, OpenUK argues that open source is key to AI sovereignty, governance, and resilience, shaping how governments build and control AI systems.

Manisha Sharma
Amanda Brock

As India hosts more than 60 ministers and heads of state for the India AI Impact Summit 2026 in New Delhi, the global conversation on artificial intelligence is shifting from ambition to execution. At the centre of this shift is a growing recognition that AI sovereignty will not be achieved through compute scale alone, but through open, resilient and collaborative technology ecosystems.


At the Summit, OpenUK is positioning open source and AI openness as strategic foundations for trustworthy, sovereign AI. With India emerging as a key voice in global AI governance, its participation highlights how open ecosystems can strengthen national capability, reduce dependency on dominant platforms and extend the benefits of AI more evenly across economies.

Speaking to CiOL on the sidelines of the Summit, OpenUK CEO Amanda Brock framed AI sovereignty not as isolation but as capability: the practical ability for governments and enterprises to adapt, deploy and govern AI on their own terms.

She pointed to open source as a proven model for transparency, accountability and innovation, arguing that openness allows countries to build AI systems that are auditable, interoperable and locally adaptable, rather than locked into a small set of global vendors.

OpenUK also highlighted India’s Digital Public Infrastructure journey as evidence that open standards can scale nationally and travel globally. As India exports this model, open AI ecosystems are increasingly seen as critical to long-term resilience, accessibility and economic inclusion.

A key panel at the Summit brought together global voices, including Jimmy Wales, Tony Blair Institute’s AI leadership, India’s EkStep Foundation and European AI startup Pleias, examining how open source can strengthen resilience across public and enterprise AI systems.

Following the Summit, OpenUK has published its AI Openness at the Impact Summit report, capturing insights and policy signals emerging from New Delhi. With the UK remaining India’s second-largest partner in open-source collaboration, the organisation sees strong momentum for deeper bilateral and multilateral cooperation on open AI.


Interview excerpts

Open source is often positioned as the counterbalance to concentrated AI power, but can community-driven ecosystems realistically compete with the capital intensity and compute dominance of frontier proprietary models?

In recent times, much of the open source we have seen in AI has begun life by being “thrown over the wall”. This is the term we use in open source to describe a completed deliverable being provided to the world in a supposedly final format. The finished product is shared rather than being developed in the open with community engagement. We see some very notable exceptions to that in projects like the agentic AI tool AutoGPT.

Community ecosystems will bring an incredible level of engagement and increased innovation. As with open-source software, for the infrastructure delivered by these communities to be sustainable, there is a need for maintenance funding and for building out the environment and landscape in which the ecosystem can thrive. An example would be a national code- and IP-holding foundation or entity. This was one aspect of the strategy China put in place to support its development and growth in open source over the past decade, and these pieces are necessary to complete the jigsaw of success.

If countries pursue sovereign AI strategies, does open source genuinely enhance autonomy, or does it simply shift dependency from corporations to globally distributed code communities?

Open source offers the opportunity to engage in global collaboration and, on a local level, to enable sovereign AI training and model development. When that’s done, open source is critical to enable the democratisation of AI on the local level and, of course, to increase competition through innovation. That iterative development in open source has been at the base of China’s sovereign success. They are the only nation to have really achieved this. Funding for this will also be something we see managed between state and enterprise, with public-private initiatives and also with state funding underwriting investment, which we have already seen. Being open source doesn’t mean that there aren’t also business opportunities.

We have to make sure that, where communities are contributing, funding is provided across projects at an appropriate level, that it reaches the right people, and that the environment for success is in place.


This will increasingly be made up of collaborating governments, likely the middle nations who form the second tier of AI success at the moment, working together at certain levels of the stack to create de facto standards, reduce individual nations’ costs and enable bigger development through collaborative funding.

As enterprises integrate open AI into core systems, where does transparency strengthen security, and where might openness increase systemic vulnerability or compliance complexity?

Transparency simply strengthens trust. Of course open source comes with security challenges. All AI comes with security challenges, and the difference with open source is that these are quickly surfaced and can be collaboratively solved. As time passes we will see trust building in these open systems.


India’s Digital Public Infrastructure has scaled successfully through open standards, but can similar openness in AI models be embedded safely into critical public systems without amplifying risk?

We need to see the language of AI include an ontology, and to find agreement on our understanding of terms. Many are already defined, with standards like “digital public good”. We need to be sure that we are all clear and not talking at cross purposes. For us to ensure risk is managed, clarity of terminology and conversation is essential. Standards like MCP that are open and held in trust will matter more and more. We are also likely to see more de facto standards and fewer formal standards weighed down by the slow pace of the standards bodies.

Who ultimately finances and governs large-scale open AI ecosystems, and how do we prevent hidden corporate influence from shaping supposedly neutral community infrastructure?


There needs to be a shift to internationally collaborative governments and enterprises funding open-source models and the AI ecosystem, including tools. I expect this will be a balance between philanthropy, enterprise, and the state, with one keeping the other in check.

Should open-source AI systems be subject to the same regulatory scrutiny as proprietary frontier models, or does openness justify differentiated accountability frameworks?

Open-source AI systems and openness generally are different things. If a system is created by communities and used at no cost, then it is simply not appropriate to expect the creator to be liable. For liability to exist, money must change hands. This has long been the case in software and will also be the case in AI. I expect that in the not-too-distant future we will begin to see risk-management tools and products for AI. Simple open-source tools already exist and are part of the Delhi Declaration. This tools-not-rules approach is critical.

We are also going to see a shift back to a focus on governance, which had disappeared in Delhi, as we head into Switzerland in 2027.