Who really owns the future of AI?
TLDR:
India hosted one of the world’s largest AI gatherings, with 200,000+ participants, 20+ heads of state scheduled, 600+ startups, and 300+ pavilions across 70,000 sq. metres.
India is scaling public AI compute to 58,000+ GPUs, pricing access at ₹65/hour, subsidising startups, and backing indigenous foundational models with over ₹10,000 crore in mission funding.
Private capital is moving fast: ₹10 lakh crore AI infra pledges, 100 MW to 1 GW data centre plans, and $50 billion global diffusion commitments announced during summit week.
India formally joined Pax Silica, a US-led alliance to secure AI and semiconductor supply chains, linking minerals, chips, and AI into one geopolitical stack.
Behind the speeches about inclusion and democratization lies a harder story: compute sovereignty, chip security, and a global race to control the infrastructure of intelligence.
The Bite:
For five days, New Delhi did not look like a policy capital. It looked like the control room of the future.
Inside Bharat Mandapam, at the India AI Impact Summit 2026, school students were testing AI coding tools in one hall, founders were pitching crop-risk prediction models in another, and policymakers were debating deepfake laws across the corridor.
Over 200,000 participants passed through the venue. Organisers said attendees came from more than 100 countries, with some counts going up to 118. More than 20 heads of state were scheduled to attend. Around 60 ministers were in the building. Over 500 global AI leaders flew in.
The expo floor stretched across nearly 70,000 square metres. Thirteen countries had their own national pavilions. More than 600 startups were showcasing products. There were also reports that roughly 2.5 lakh attendees, mostly under 30, moved through the expo zone over the week, which tells you exactly who the summit was trying to speak to.
This wasn’t a closed-door think tank session. It was a public display of ambition.
And the messaging was consistent throughout the week: AI should not belong to just three or four countries. It should not belong to just five companies. It should not belong to a handful of labs sitting on endless GPU clusters.
The summit deliberately moved the conversation away from “existential AI doom” and toward “AI at population scale.” Earlier global AI summits had focused heavily on frontier risks, catastrophic misuse, and advanced model safety. Delhi shifted the lens. The question here was simpler and more immediate: how does AI work for 1.4 billion people?
That shift changed the tone of the entire event.
Instead of asking whether AI might one day become too powerful, panels asked how AI could reduce farmer credit risk, detect early-stage tuberculosis, personalise education for rural students, or improve government service delivery. Instead of focusing on speculative long-term risks, the emphasis was on near-term deployment.
There were also visible efforts to bring citizens into the narrative. A Guinness World Record was set during the week after over 250,000 people pledged responsible AI usage within 24 hours. The initial target had been just 5,000. That scale of public participation was deliberate. It was meant to show that this conversation was not just happening on stage, but across the country.
Throughout the week, more than 300 formal side meetings were reportedly held between governments and industry leaders. Delegations discussed AI governance standards, semiconductor investments, mineral supply coordination, and cross-border data frameworks.
By the time discussions moved toward a joint declaration, the direction of the summit had become clear. The focus was not only on AI innovation, but on who shapes the rules around it. Conversations repeatedly returned to access, standards, and infrastructure. The underlying message was that emerging economies want a greater role in setting global AI norms, and that large-scale deployment across developing nations must be treated as seriously as frontier model development.
That broader picture makes what happened on the sidelines more significant.
India formally joined Pax Silica, a US-led alliance launched recently to secure AI and semiconductor supply chains. The agreement links artificial intelligence to critical minerals, chip manufacturing, logistics, and advanced hardware ecosystems. In practical terms, it connects AI ambitions with the upstream infrastructure required to sustain them.
To understand what really happened in Delhi, you need to peel back the layers.
Often, artificial intelligence was described as a civilisational turning point. A technology that could transform agriculture yields, democratise healthcare diagnostics, enable multilingual education, and accelerate financial inclusion for billions. It was less about doomsday scenarios and more about deployment at population scale.
All the talk about welfare and democratization sounds great. But the real plot twist sits inside the data.
In 2024, the government approved a national AI mission with an outlay of ₹10,371.92 crore over five years. The allocation is not limited to academic research. It covers public AI compute infrastructure, startup funding support, curated dataset platforms, development of indigenous foundational models, and a “Safe & Trusted AI” component focused on guardrails and responsible use.
Right now, the country claims access to over 38,000 GPUs under its mission framework. Another 20,000 GPUs are expected to come online shortly, taking total compute capacity past 58,000 GPUs. Access pricing has been pegged at roughly ₹65 per hour, with up to 40% subsidy for startups.
Compute capacity simply means how much raw processing power is available to run or train artificial intelligence systems.
Training large AI models can cost millions of dollars in compute alone, so this is a direct attempt to level the playing field.
Think about it.
Until recently, cutting-edge AI development was concentrated in a handful of global labs backed by hyperscalers. Access to GPUs was limited. Cloud credits were scarce. Training costs were prohibitive. If you were not in Silicon Valley or closely aligned with it, you were already behind.
By pricing compute at ₹65 per hour and subsidising usage, India is effectively saying that AI infrastructure cannot remain gated behind corporate walls. It is trying to treat compute as public infrastructure, similar to how digital payments were turned into public rails through UPI.
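To get a feel for what that pricing means in practice, here is a back-of-envelope sketch. The ₹65/hour rate and the 40% startup subsidy come from the announcements above; the workload itself (a 512-GPU run over two weeks) is a hypothetical illustration, not a summit figure.

```python
# Back-of-envelope cost of a training run on subsidised public compute.
# Rate and subsidy are from the announced scheme; the workload is hypothetical.

RATE_INR_PER_GPU_HOUR = 65.0   # announced access price
STARTUP_SUBSIDY = 0.40         # up to 40% subsidy for startups

def training_cost(num_gpus: int, hours: float, subsidised: bool = False) -> float:
    """Total compute cost in INR for num_gpus running for the given hours."""
    rate = RATE_INR_PER_GPU_HOUR
    if subsidised:
        rate *= (1 - STARTUP_SUBSIDY)
    return num_gpus * hours * rate

# Hypothetical example: 512 GPUs for two weeks (336 hours).
full_price = training_cost(512, 336)                    # ≈ ₹1.12 crore
with_subsidy = training_cost(512, 336, subsidised=True) # ≈ ₹67 lakh
print(f"Full price: ₹{full_price:,.0f}, with subsidy: ₹{with_subsidy:,.0f}")
```

Even at these rates, a serious training run is a crore-scale expense, which is exactly why subsidised access matters for startups that could never build such clusters themselves.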
But GPUs do not run on ambition. They run on silicon. And silicon runs on minerals.
That is where Pax Silica enters the picture.
The alliance seeks to secure the entire AI technology stack, from critical mineral extraction to advanced semiconductor manufacturing and logistics. Rare earth elements, lithium, cobalt, gallium, high-purity silicon, fabrication plants, and packaging facilities all sit upstream of AI models. If any link in that chain is disrupted, everything downstream stalls.
For years, supply chains in advanced manufacturing have been concentrated geographically. That concentration creates vulnerability. Export controls, sanctions, geopolitical tensions, or sudden regulatory shifts can disrupt access to chips and materials overnight.
By joining Pax Silica, India signalled alignment with a bloc attempting to reduce dependency risks in technology supply chains. It plugged itself into a coalition that includes advanced economies and strategic partners seeking resilience across chips, minerals, and AI infrastructure.
India has already committed ₹76,000 crore under its Semiconductor Mission to position itself as a chipmaking hub. Now, with AI compute scaling to 58,000 GPUs and a semiconductor incentive structure in place, the software and hardware stacks are beginning to align.
Of course, compute power alone does not build useful AI systems. Models need data to learn from. That is where the next layer of the strategy comes in.
India has curated more than 7,500 datasets under a national platform called AI Kosh. These datasets span agriculture, healthcare, governance, climate, language, and public services. The idea is to give researchers and startups structured, high-quality data that can be used to train AI systems relevant to local problems.
At the same time, twelve teams have been shortlisted to build what are called foundational models. The list includes Sarvam AI, along with academic institutions and industry consortiums working on language-first large models. These are base systems that can later power everything from government chat interfaces to enterprise tools. Combined public funding support for these teams crosses ₹1,000 crore.
This focus on local models comes from a fairly simple problem. Most large language models today are trained primarily on Western internet data. They perform well in English and other dominant global languages, but often struggle with regional nuance, dialects, cultural references, and administrative frameworks outside those ecosystems.
India has 22 official languages and hundreds of dialects. If AI is expected to power public service delivery, financial access, healthcare triage, legal documentation, and citizen interaction at scale, it cannot operate only in polished English.
If a farmer in Vidarbha or a small business owner in Coimbatore cannot interact with AI in their own language and context, the technology does not democratise anything. It simply shifts exclusion into digital form.
That is why indigenous foundational models, ones that understand Indian names, legal terminology, regional slang, and government schemes, are being funded. In practical terms, AI infrastructure is not only about having the largest cluster of GPUs. It is about ensuring that the intelligence running on those chips reflects the society it is meant to serve.
But while public infrastructure expanded, private capital made even louder announcements.
Reliance Industries pledged ₹10 lakh crore over seven years toward building AI capacity, positioning the investment as foundational infrastructure for the next phase of digital growth. A strategic partnership between the Tata Group and OpenAI outlined plans to build 100 megawatts of AI infrastructure in India, with the ability to scale that up to 1 gigawatt over time.
Google, meanwhile, reiterated that it is on pace to invest $50 billion globally by the end of the decade to expand AI access and address what it described as the growing AI divide.
Data centres measured in megawatts signal industrial-scale ambition. A 1 gigawatt AI infrastructure build-out implies massive energy planning, cooling systems, land allocation, and grid stability. It also implies that AI workloads are no longer fringe compute tasks. They are core industrial loads.
All of this infrastructure, however, depends heavily on one thing: energy.
Large-scale AI data centres consume significant electricity and water. While lightweight AI models can reduce energy consumption by up to 90% compared to massive architectures, the aggregate demand continues to rise. Sustainability, renewable energy sourcing, and grid upgrades become central to the AI conversation.
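The scale of that energy question is easy to underestimate. A rough sketch below converts the 100 MW and 1 GW figures from the announcements into annual energy demand; the per-capita consumption figure is an approximate assumption for illustration, not a summit number.

```python
# Rough annual energy demand of AI data centres at different build-out scales.
# The per-capita consumption figure is an approximate assumption, for scale only.

HOURS_PER_YEAR = 24 * 365        # 8,760 hours
INDIA_PER_CAPITA_KWH = 1300      # assumed annual per-capita electricity use (approx.)

def annual_twh(capacity_mw: float, utilisation: float = 1.0) -> float:
    """Annual energy in TWh for a facility running at the given average utilisation."""
    return capacity_mw * 1e3 * HOURS_PER_YEAR * utilisation / 1e9  # kWh -> TWh

for mw in (100, 1000):
    twh = annual_twh(mw)
    people = twh * 1e9 / INDIA_PER_CAPITA_KWH
    print(f"{mw:>5} MW -> {twh:.2f} TWh/yr, roughly the electricity "
          f"use of {people / 1e6:.1f} million people")
```

Run flat out, a 1 GW campus would draw close to 9 TWh a year, which is why grid planning, cooling, and renewable sourcing sit at the centre of these announcements rather than as an afterthought.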
All of this expansion also brings up a more practical question. What does this mean for jobs?
AI is no longer confined to research labs. Agentic systems and generative tools are already being deployed across finance, customer support, legal services, content creation, and software development. Tasks that once required junior analysts, paralegals, or support executives can now be completed in seconds by a model trained on large datasets.
This concern surfaced repeatedly during the summit discussions. While the dominant narrative leaned toward augmentation rather than replacement, the numbers explain why the question refuses to go away. India’s services sector employs millions in IT, business process outsourcing, finance, and back-office operations. These are precisely the kinds of roles most exposed to automation.
Even so, the economic tension is real. If AI systems can draft contracts, review financial statements, generate marketing content, or write production-level code, the productivity gains are obvious. But so is the potential for workforce compression in white-collar segments.
For markets, the stakes are significant. India’s export-driven IT and BPO ecosystem is worth well over $200 billion annually. If automation reshapes billing models, headcount structures, or pricing power, the ripple effects extend beyond individual companies into macro growth projections.
So while the summit celebrated scale, inclusion, and infrastructure, the labour equation quietly sits underneath it all. Growth from AI depends heavily on how smoothly the workforce transitions alongside these systems.
That is also why the summit placed so much emphasis on positioning India as a serious AI ecosystem rather than just a large consumer market.
The summit attempted to reposition India as one of the top three AI ecosystems globally in research, talent, and adoption. The country hosts over 2,975 Global Capability Centres for multinational firms. Roughly 65% of its population is under 35. Internet users exceed 750 million. Smartphone penetration is deep. Digital payment rails are ubiquitous.
In other words, the demand side is ready.
The question is whether supply-side execution keeps pace.
So where does this leave the market?
If AI infrastructure becomes part of national capability, then export controls, supply chain disruptions, and diplomatic alignments start influencing valuation models. Access to chips, access to power, and access to talent become strategic variables, not operational footnotes.
At the same time, the opportunity is equally broad. A young, connected population, deep digital payment rails, and one of the world’s largest pools of technical talent create a rare combination of demand and capacity. Whether that translates into durable advantage will depend less on announcements and more on execution over the next few years.
What the summit ultimately made clear is this: India is no longer content to consume AI built elsewhere. It is trying to architect its own layer of the future. And in a world where intelligence runs on silicon, alliances like Pax Silica suggest the race is not just about building smarter machines. It is about building the ecosystem that keeps those machines running, at scale, without being forced to depend on someone else’s supply chain.
If you’ve made it this far, thanks for reading. We’ll be back next week, like clockwork.
Got a company, sector, or story you think we should dig into? Hit reply and tell us.
If we pick your suggestion, we’ll send some Filter Coffee merch your way.
Coffee Crew out.