The nation’s first digital public infrastructure (DPI) network of interoperable data centers is beginning to take shape, with cloud major Oracle and an AI infrastructure provider giving their in-principle approval to join.
People+ai, a project of the EkStep Foundation co-founded by Nandan Nilekani, was launched last year to address the nation’s growing compute demand, driven largely by AI. Its Open Cloud Compute (OCC) initiative seeks to create an open network where investors can fund projects, new players can set up mini data centers, and customers can discover them.
Tanvi Lall, people+ai’s head of strategy, said NeevCloud and Oracle have given their in-principle approval to join the network.
Narendra Sen, chief executive of Indore-based NeevCloud, an AI infrastructure startup building a super cloud, said the company will support people+ai with its proprietary liquid immersion cooling technology, which allows it to run powerful GPUs.
In addition to these companies, people+ai has been in talks with a large electronics manufacturing services provider that has significant data storage needs.
According to Lall, “running a few use cases on a decentralized network of open cloud compute for supply chain, logistics management, and data intelligence will work for them.”
Oracle did not respond to queries.
On March 7, the government approved an allocation of more than Rs 10,372 crore for the IndiaAI Mission. A key component of this ambition is the IndiaAI Compute Capacity pillar, which aims to build a state-of-the-art, scalable AI computing infrastructure of over 10,000 graphics processing units (GPUs) through strategic public-private partnerships.
Speaking on the sidelines of an event on March 8, S Krishnan, secretary of electronics and IT, said the GPUs sanctioned under the IndiaAI Mission will be made available within the next 18 to 24 months.
He added that the government will provide viability gap funding for this computing infrastructure and invite industry bids under the mission.
Globally, compute hardware is a scarce resource, and India trails the US, China, and the UK in the AI race. Cash-strapped startups have been urging the government to provide compute infrastructure so they too can compete.
According to people+ai, the network aims to offer both local and international clients a sustainable model that can be scaled across India.
The model estimates that India needs 10,000 mini data centers to expand its compute capacity. By bringing compute closer to users, such a network would strengthen data sovereignty, enable faster processing, and reduce latency.
Compute capacity refers to the amount of server and storage resources available to the databases in an instance. One node is equivalent to 1,000 processing units, and when creating an instance, its compute capacity can be specified either as a number of processing units or as a number of nodes.
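The node-to-processing-unit relationship above can be sketched as a small conversion helper. This is an illustrative example, not people+ai's or any cloud provider's actual API; the function names are hypothetical, and the only assumption taken from the text is that one node equals 1,000 processing units.

```python
# Illustrative conversion between the two ways of specifying compute
# capacity described in the article: nodes and processing units.
# Assumption (from the article): 1 node = 1,000 processing units.
PROCESSING_UNITS_PER_NODE = 1_000

def to_processing_units(nodes: float) -> int:
    """Express a compute capacity given in nodes as processing units."""
    return int(nodes * PROCESSING_UNITS_PER_NODE)

def to_nodes(processing_units: int) -> float:
    """Express a compute capacity given in processing units as nodes."""
    return processing_units / PROCESSING_UNITS_PER_NODE

print(to_processing_units(3))  # 3 nodes -> 3000 processing units
print(to_nodes(500))           # 500 processing units -> 0.5 node
```

Specifying capacity in processing units rather than whole nodes allows instances smaller than one node, which is the kind of granularity a network of mini data centers would rely on.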
According to Lall, the company’s paper outlining a network for open cloud compute, presented at the Global Technology Summit in Delhi last December, gained significant traction among Asia-Pacific tech executives and global policymakers.
“We are in talks with NeevCloud, which is attempting to build a domestic hyperscaler. They are attempting to put up 40,000 GPUs at scale. We also consider Oracle Cloud Infrastructure a thought partner. Since the network is not yet operational, none of them, including Vigyanlabs in Mysuru, has been onboarded,” she said.
According to people+ai, Srinivas Varadarajan, CEO of Vigyanlabs, presented the company’s work on building a sustainable micro data center in Mysuru at a recent workshop. On November 17 last year, ET reported, citing Varadarajan, that he supports the concept of a network of micro data centers in principle.
In partnership with Protean eGov Technologies, Vigyanlabs operates two data centers in Pune and Mysuru.
These companies have given in-principle approval to join and are helping people+ai design the open network from the supply side.
Micro data centers are at the heart of people+ai’s initiative to enable AI infrastructure across the country, NeevCloud’s Sen told ET.
“Typical data centers are cooled using air conditioning systems. They are either dry coolers or water-based coolers,” Sen said.
According to him, water-based coolers consume more water, while dry coolers draw more electricity. Neither approach is viable in a hot, tropical country like India, where many areas face water shortages, he said.
“General-purpose CPU computing needs a power density of eight kilowatts per rack, while a single GPU server needs ten kilowatts. At that eight-kilowatt density, we can fit hardly one server in a rack. So we immerse every server in coolant,” he said.
“Since the servers are submerged, the coolant removes heat completely. Compared with traditional air conditioning, our method cuts cooling requirements by 70% to 80%. Cooling a 10-kW server the conventional way needs an additional 10 kW; we need just two kilowatts. We are going to offer this to people+ai,” he said.
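Sen’s numbers can be checked with simple arithmetic. The sketch below uses only the figures quoted in the article (10 kW server load, 10 kW conventional cooling overhead, 2 kW immersion cooling overhead); the variable names are illustrative, not NeevCloud’s.

```python
# Back-of-the-envelope check of the cooling figures Sen cites.
server_load_kw = 10.0            # one GPU server, per the article
conventional_cooling_kw = 10.0   # extra power for air-based cooling
immersion_cooling_kw = 2.0       # extra power for liquid immersion cooling

# Reduction in cooling power from switching to immersion cooling
reduction = 1 - immersion_cooling_kw / conventional_cooling_kw
print(f"Cooling power cut by {reduction:.0%}")  # 80%, within the 70-80% range cited

# Total facility draw per server under each approach
print(server_load_kw + conventional_cooling_kw)  # 20.0 kW conventional
print(server_load_kw + immersion_cooling_kw)     # 12.0 kW with immersion
```

The 80% figure falls at the top of the 70% to 80% range quoted, which is consistent with the 10 kW versus 2 kW comparison Sen gives.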