India’s AI narrative has largely been shaped by models, data, and startup momentum. But as artificial intelligence moves from experimentation to mission-critical deployment, a more fundamental constraint is coming into view: compute infrastructure.
High-performance servers, AI clusters, and private cloud systems now determine how securely, independently, and at what scale AI can be deployed. For a country that generates massive volumes of data but remains dependent on imported systems, compute is no longer a backend concern. It is a strategic capability.
In this conversation with CiOL, Swastik Chakraborty, Vice President of Technology, Netweb Technologies, explains what sovereign compute means in practice, where Indian OEMs can differentiate despite global dependencies, and how policy, enterprises, and manufacturers must align to build the infrastructure powering India’s AI future.
What should truly define sovereign compute for India: is it about local assembly, or full-stack capability across chips, interconnects, firmware, and software?
Sovereign compute for India cannot be defined by local assembly alone; it must be rooted in full-stack capability across chips, interconnects, firmware, and software. As India charts its own path, replicating imported models of compute sovereignty will neither deliver long-term strategic autonomy nor meet the scale of national AI ambitions. Today, India contributes less than 2% of global AI compute capacity, but the trajectory is shifting rapidly. Driven by the Government of India and forward-looking private enterprises, the country has already deployed over 80,000 GPUs across public and private ecosystems as of mid-2025 - an unprecedented expansion of AI compute.
To truly attain sovereign compute, India must evolve from system assembly to controlling critical layers of the infrastructure stack. This requires a coordinated national framework anchored on five pillars: domestic compute infrastructure and nationally governed data centers; semiconductor and advanced packaging capability; data sovereignty and governance; indigenous talent pipelines across hardware and AI systems; and robust legal compliance.
Netweb is deeply aligned with this national mission. By co-developing Make-in-India GPU servers, accelerators, and full-stack AI systems in partnership with global technology leaders, we are helping Indian startups, research institutions, and enterprises access trusted, sovereign, and high-performance compute to build the next generation of world-class AI applications for all kinds of citizen-centric services.
Given that most AI hardware still depends on imported GPUs and accelerators, where can Indian OEMs realistically add differentiated value in the short term - system design, cooling, board-level IP, or packaging?
While India continues to rely on imported GPUs and accelerators, Indian OEMs are rapidly moving up the value chain by adding differentiated value in system design, advanced thermal engineering, board-level IP, and ATMP capabilities. Over the past few years, India has made strong progress in semiconductor and system design, supported by government-led initiatives such as the Design Linked Incentive (DLI) Scheme and the India Semiconductor Mission (ISM). These efforts have enabled Indian design centers to contribute to advanced chip architectures, including work on 3nm and next-generation 2nm designs.
On the ATMP (Assembly, Testing, Marking & Packaging) front, India is strengthening its footprint through ISM-backed investments, with large-scale projects such as Micron’s ATMP facility in Sanand, Gujarat, and Tata Semiconductor Assembly and Test in Morigaon, Assam. These facilities mark a significant step toward establishing a robust backend semiconductor ecosystem.
Netweb is strongly aligned with this national direction. In May last year, we inaugurated India's flagship end-to-end manufacturing facility for high-end computing servers, storage, and switches in Faridabad, featuring PCB design, manufacturing, and SMT lines. With in-house capability to design and manufacture complex 16-24 layer motherboards, we deliver complete system-level innovation - spanning design, manufacturing, integration, and deployment of high-performance servers, storage, and AI systems.
How should enterprises evaluate the trade-off between proven global systems and emerging local alternatives, especially in sectors like defense, BFSI, and public research that prioritize data sovereignty?
In today’s geopolitical climate, data is not merely an economic resource - it is a strategic national asset. As highlighted in national policy discussions, technology is increasingly being weaponized, with compute infrastructure and AI models often serving as leverage in global negotiations. Sectors such as defense, BFSI, public research, and citizen services therefore cannot afford to rely solely on global systems whose architecture, firmware, and data pathways remain opaque and non-auditable.
The performance differences between global and local systems are not stark, since local systems still use chips from global vendors, with other components sourced from domestic suppliers. On the flip side, global systems carry inherent risks: data exfiltration, policy leakage, dependency on closed-source models, and the possibility of our own data being used to build powerful closed-source models. India may end up using AI services "powered by our data but owned by others." Enterprises must therefore evaluate systems not merely on benchmarks or brand familiarity, but on four sovereignty parameters: data residency and auditability, firmware and supply-chain trust, compliance with national AI and cybersecurity mandates, and long-term strategic autonomy.
In this context, emerging Indian alternatives offer a compelling strategic advantage. They ensure full transparency of the stack, align with India’s sovereign AI mission, and mitigate geopolitical risk while enabling on-prem, controlled, and policy-compliant compute.
Netweb plays a critical role in enabling this shift. We design and build trusted-firmware systems and Make-in-India GPU/AI servers tailored for sensitive national workloads. Our systems provide the auditability, control, and localized performance that high-trust sectors need, while still delivering enterprise-grade reliability comparable to global alternatives.
For enterprises prioritizing sovereignty, security, and data integrity, the trade-off is clear: global systems offer performance; Indian systems like Netweb’s offer performance plus protection.
What institutional gaps, such as testing infrastructure, certification labs, skilled manpower, or financing, most limit India's ability to build and deploy large-scale AI compute systems domestically?
We are not constrained by institutional gaps such as testing infrastructure, certification labs, or regulatory frameworks. Those foundations for building and validating advanced systems already exist and are steadily strengthening. The real bottleneck today is the pace and depth of AI adoption, especially across government and large enterprises.
The cost of not investing in AI will soon be far higher than the cost of investing in it. What we need is faster decision-making, bolder experimentation, and a clear mandate to embed AI into core public services and mission-critical enterprise workloads - not just pilot projects at the edges.
On the talent side, India already has a strong AI workforce and over 1,400 startups pushing innovation across sectors. The gap is not volume of manpower, but specialized expertise: high-speed PCB design, advanced chip packaging, secure firmware, and system-level optimization. These skills are essential to move from "assemble and integrate" to "design, optimize, and innovate" in large-scale AI compute systems.
In short, building and deploying domestic AI compute infrastructure is a marathon, not a sprint. India has the labs, the policy intent, and the entrepreneurial base. The next leap will come from accelerating AI adoption in government and industry, while deepening niche hardware and systems skills that turn our existing ecosystem strengths into globally competitive AI infrastructure.
Financially, the IndiaAI Mission’s ₹10,300 crore allocation and USD 11.1 billion in private AI investments (2013–2024) form a strong foundation, but sustained public–private capital is vital to build GPU clusters and semiconductor back-end facilities.
What kind of government policy or procurement frameworks could accelerate indigenous compute adoption while avoiding inefficiencies or protectionist barriers?
India's digital paradox is stark: while producing 20% of global data with only 3% of global data center capacity, the nation faces a looming infrastructure crisis. With 5G penetration expected to reach 88% by 2027 and AI adoption accelerating across sectors, data consumption could triple within years. Currently, a dominant share of Indian data flows abroad for processing - creating sovereignty risks, latency issues, and economic leakage worth billions annually.
Government policies and procurement can accelerate indigenous compute without sliding into protectionism by making sovereign-by-design mandatory for sensitive workloads, with a defined share of compute and storage running on India-based, India-owned infrastructure. Preferential procurement can also be extended to Make-in-India AI servers and storage that meet open standards and performance and security benchmarks.
Furthermore, structured multi-year framework contracts from ministries and PSUs can provide predictable anchor demand for domestic providers. The aim is not isolation, but rebalancing - so more of India's data and AI runs on secure, competitive, indigenous infrastructure.
Looking ahead five years, what’s a realistic outcome for India’s compute industry becoming a regional supplier for sensitive workloads, a large-scale assembler for global data centers, or a developer of its own compute IP? What will it take to get there?
Five years from now, India's compute industry is poised to emerge as a strategic regional hub for AI and high-performance computing. The country is already advancing toward building its first semiconductor fabrication plant, a milestone that will mark the beginning of true end-to-end "Make in India" chip production. Once this foundation is laid, key enablers - skilled human capital, robust financing mechanisms, and access to diverse, high-quality datasets - will strengthen organically. Global confidence in India's digital infrastructure is reflected in the surge of data center investments, with capacity exceeding 1.5 GW in 2025 and projected to reach 8-9 GW by 2030, driven by AI adoption, cloud computing, and stringent data localization policies.
India’s rapid progress in AI research, chip design, and system integration suggests that the nation will evolve beyond being a large-scale assembler. Instead, it will emerge as a regional supplier of sovereign compute for sensitive workloads and a co-developer of next-generation AI hardware and IP, aligned with the IndiaAI Mission and semiconductor roadmap.
In this transformation, Netweb is playing a key role in shaping India's sovereign compute vision. Through deep collaboration with global technology leaders and active participation in national AI and HPC initiatives, Netweb is developing Make-in-India high-performance systems, private cloud solutions such as Skylus, GPU orchestration platforms such as Skylus.ai, AI infrastructure, and trusted compute solutions that power enterprise, research, and government workloads. By bridging indigenous design with world-class engineering, Netweb is helping position India as a credible global source of secure, scalable, and sovereign AI compute.