Technology

India hosts global AI summit in New Delhi

Leaders tout openness while policy fights center on compute and data residency, and AI governance risks becoming cloud nationalization by licensing

[Image: World leaders discuss AI future at India’s global summit in New Delhi (aljazeera.com)]
[Image: OpenAI’s Altman tells leaders regulation urgently needed (dhakatribune.com)]

World leaders and technology executives arrived in New Delhi to discuss AI’s future at India’s global summit, with the usual ceremonial blend of “openness” and “guardrails” masking a more concrete struggle: who controls compute, who controls data, and who gets to issue the permits. Al Jazeera reports the summit focused on governance, innovation, and the international coordination needed as AI systems spread across economies.

India’s pitch is increasingly consistent: it wants to be an AI superpower without becoming a mere customer of US cloud platforms or a dependency in China’s hardware ecosystem. That ambition sounds like market-building, but it often translates into policy that makes “sovereignty” synonymous with domestic chokepoints.

The key battlegrounds are not philosophical. They are infrastructural.

First is compute. Large-scale model training and deployment are constrained by datacenters, power allocation, advanced GPUs, and networking. When governments talk about “strategic autonomy,” they are usually describing preferential access to scarce megawatts and imported chips — administered through licensing, partnerships, and politically blessed procurement.

Second is data residency. Requirements that certain categories of data remain onshore are sold as privacy and security. But they also function as an industrial policy instrument: forcing foreign vendors to build locally, partner locally, and submit to local audit. This is how you nationalize the cloud stack without using the word “nationalize.”

Third is governance-by-registration. Model registries, mandatory safety evaluations, and compliance reporting can be legitimate in narrow contexts. But the default outcome is predictable: the firms best able to comply are the largest incumbents, and the state gains a lever over which models can be deployed, by whom, and under what conditions. Innovation becomes a permissioned activity.

Al Jazeera’s account of leaders debating AI’s future fits a broader global pattern: every state wants “open” AI — preferably open to its own champions — and “safe” AI — preferably safe in ways that require continuous monitoring and centralized enforcement.

For a country with India’s entrepreneurial energy, AI governance risks becoming a tariff wall for services: a domestic compliance industry that blocks small competitors while inviting a handful of large vendors into a tightly supervised market. The summit’s language may celebrate democratizing AI, but the implementation details — compute allocation, residency mandates, and licensing — determine whether India gets a competitive ecosystem or a regulated utility.

The political class will call it “responsible AI.” The market will recognize it as the newest form of gatekeeping, now with GPUs.