“The world’s most powerful AI models don’t just run on code — they run on megawatts.”
Google’s announcement on October 14 of a $15 billion AI hub in Visakhapatnam — including a gigawatt-scale data centre campus, subsea cable gateways, and renewable power infrastructure — is a landmark for India’s AI ambitions. But behind the headlines, a more fundamental question looms: where will all the electricity come from? AI at scale is not merely a digital appetite; it is a physical energy demand.
In this blog, I explore the contours of this challenge: how much energy AI might consume in India, whether the grid can absorb it, what technical and policy steps are needed, and whether a “green AI” future is realistically within reach.
The scale of the problem: AI’s power appetite
From training to inference: the energy curve
- Training large models is extremely energy-intensive. A commonly cited estimate is that GPT-4’s training run consumed on the order of ~50 GWh over a single continuous run, including all overheads.
- In India’s context, companies deploying clusters with thousands of GPUs estimate that a comparable training run might consume 30–70 GWh including overheads. Ten such clusters, if continuously active, would imply ~300–700 GWh. But since training is episodic (weeks or months), the average continuous load would be much smaller.
- Still, AI workloads are rapidly increasing their share of data-centre energy demand. GPU-based training, fine-tuning and inference can consume 5–10× more power per rack compared to conventional CPU-based workloads.
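The arithmetic behind such estimates is easy to sketch. The calculation below multiplies GPU count, per-device power, run duration, and a facility overhead factor; all the inputs are illustrative assumptions, not measured figures from any real cluster.

```python
# Back-of-envelope estimate of a large training run's energy use.
# All inputs are illustrative assumptions, not measured data.

def training_energy_gwh(num_gpus, gpu_power_kw, days, pue):
    """Total facility energy for a continuous training run, in GWh.

    num_gpus:     accelerators running in parallel
    gpu_power_kw: average draw per accelerator (incl. host share), kW
    days:         wall-clock duration of the run
    pue:          Power Usage Effectiveness (facility overhead multiplier)
    """
    it_energy_kwh = num_gpus * gpu_power_kw * days * 24
    return it_energy_kwh * pue / 1e6  # kWh -> GWh

# A hypothetical cluster: 20,000 GPUs at ~1 kW each for 90 days, PUE 1.4
print(round(training_energy_gwh(20_000, 1.0, 90, 1.4), 1))  # ~60.5 GWh
```

A hypothetical 20,000-GPU, 90-day run lands near the top of the 30–70 GWh range cited above, which is why episodic-versus-continuous load matters so much for grid planning.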
Globally, the data centre sector’s electricity demand is poised for massive growth:
- The International Energy Agency (IEA) projects data centre electricity demand will more than double by 2030, reaching ~945 TWh (roughly Japan’s current electricity consumption) — with AI as the main driver.
- Deloitte expects that by 2030, AI will account for a substantially larger share of data centre demand, necessitating efficiency upgrades and new power strategies.
- According to BloombergNEF, data centres may account for ~8.6% of all U.S. electricity demand by 2035 (from ~3.5% today) — a signal of how AI and compute scaling can rewire power markets.
In India, the predictions are no less ambitious — and no less daunting:
- Current data centre capacity is roughly 1.4–1.8 GW, expected to expand to ~9 GW by 2030, at which point data centres might consume ~3% of India’s electricity.
- The AI-driven portion of this demand might require an additional 40–50 TWh annually by 2030 — a nontrivial chunk of India’s total electricity generation.
- The backdrop is a grid under growing strain: India’s overall electricity demand in June 2023 was ~139 billion kWh (139 TWh), up ~4.4% year on year, and data centres are an increasingly visible contributor to that load.
To put this in perspective: India’s total electricity generation in FY 2024–25 crossed ~1,824 TWh, of which non-fossil sources (solar, wind, hydro, etc.) accounted for ~25% of generation. An AI-driven demand of 40–50 TWh would therefore not shatter the grid overnight, but neither is it trivial, especially given distribution, timing, and geographic constraints.
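The share implied by those figures is worth checking explicitly. A two-line calculation against the FY 2024–25 generation total:

```python
# Share of India's FY 2024-25 generation (~1,824 TWh) that a projected
# 40-50 TWh of AI-driven demand would represent.
total_twh = 1824
for ai_demand_twh in (40, 50):
    share = 100 * ai_demand_twh / total_twh
    print(f"{ai_demand_twh} TWh -> {share:.1f}% of total generation")
# 40 TWh -> 2.2% of total generation
# 50 TWh -> 2.7% of total generation
```

Roughly 2–3% of today’s generation: manageable in aggregate, but concentrated in a handful of states and drawn around the clock, which is where the real difficulty lies.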
Can India’s grid (and power ecosystem) support this?
The question isn’t just whether there is enough installed capacity, but whether the grid and power systems — generation, transmission, distribution, flexibility — can deliver reliable, economical, and sustainable power to AI-scale compute.
Capacity vs dispatch
India’s installed capacity (around 475–490 GW in recent data) includes a substantial share of renewables, hydro, and fossil sources. But capacity is not the same as real-time supply: a gigawatt of installed solar does not mean you can reliably run a data centre at full load at midnight.
Key challenges:
- Temporal mismatch: Many data-centre loads are continuous or nocturnal (training running overnight), whereas renewables like solar provide most output during daytime. Wind and hydro can help, but their variability and seasonality matter.
- Grid flexibility & storage: Without sufficient storage (batteries, pumped storage) or grid flexibility, balancing supply and demand becomes harder — and pricing volatility or curtailment risks rise.
- Transmission & distribution bottlenecks: The location of compute hubs often differs from ideal renewable resources. Building new lines, strengthening load corridors, and managing local grid constraints is expensive and time-consuming.
- Peak demand stress: Sudden surges in compute or inference loads can stress local feeders. AI clusters may exhibit spiky loads rather than smooth ones.
Opportunities from compute-grid interaction
Interestingly, AI-heavy data centres also offer new flexibility possibilities:
- A recent study shows that AI-focused HPC/data centres can provide grid flexibility services (e.g., demand shifting, fast ramp-down/up) at ~50% lower cost than general-purpose HPC centres, by aligning computation schedules with grid conditions.
- Another line of work develops energy management systems (EMS) for renewable-colocated AI data centres, which co-optimize AI workload scheduling, on-site renewable use, and grid interactions to reduce cost and emissions.
- Forecasting shorter-term power demand for AI data centres (e.g. with ML models) can improve integration with grid dispatch and reduce overprovisioning.
Thus, AI clusters need not be passive loads — properly designed, they can become semi-responsive grid assets.
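The demand-shifting idea can be made concrete with a toy scheduler: given an hourly forecast of renewable share, deferrable training jobs are packed into the greenest hours first. Both the forecast profile and the job size here are hypothetical illustrations, not real grid or workload data, and a production EMS would of course consider prices, deadlines, and checkpointing costs as well.

```python
# Toy demand-shifting scheduler: place deferrable compute-hours into the
# hours with the highest forecast renewable share. The profile and job
# size are hypothetical illustrations, not real grid data.

def schedule_jobs(renewable_share, job_hours):
    """renewable_share: 24 hourly forecasts in [0, 1].
    job_hours: deferrable compute-hours to place (<= 24 here).
    Returns the chosen hours, greenest first, in chronological order."""
    ranked = sorted(range(len(renewable_share)),
                    key=lambda h: renewable_share[h], reverse=True)
    return sorted(ranked[:job_hours])

# A solar-heavy profile: renewable share peaks around midday.
profile = ([0.2] * 6
           + [0.4, 0.6, 0.8, 0.9, 0.95, 0.95, 0.9, 0.8, 0.6, 0.4]
           + [0.2] * 8)
print(schedule_jobs(profile, 6))  # [8, 9, 10, 11, 12, 13]
```

Even this greedy one-liner captures the core mechanic the cited studies formalize: compute that can wait a few hours becomes a dispatchable resource rather than a fixed burden.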
Cooling, water, and site constraints
Beyond electricity, AI data centres bring huge challenges in thermal management and water usage:
- AI workloads generate extreme heat densities — 70–150 kW per rack or more — making older cooling systems inadequate.
- In a water-stressed country like India, heavy water-cooled systems pose sustainability risks. Some firms in India are adopting liquid cooling, rear-door heat exchangers, or closed-loop cooling with minimal water usage.
- Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE) become critical metrics. Yotta, for example, claims PUE ~1.4 and negligible incremental water consumption through efficient cooling design.
Therefore, site selection must account for water availability, local climate, and cooling infrastructure — not just power.
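Both efficiency metrics are simple ratios, which makes them easy to audit if operators publish the underlying numbers. A minimal sketch, with illustrative figures only (the Yotta numbers above are the operator’s own claims, not inputs here):

```python
# PUE and WUE as simple ratios. All numbers are illustrative.

def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy.
    1.0 is the theoretical ideal; lower is better."""
    return total_facility_kwh / it_kwh

def wue(water_litres, it_kwh):
    """Water Usage Effectiveness: litres of water per kWh of IT energy."""
    return water_litres / it_kwh

# A facility drawing 14 MWh in total to deliver 10 MWh of IT load:
print(pue(14_000, 10_000))            # 1.4
# The same facility using 18,000 L of cooling water that day:
print(round(wue(18_000, 10_000), 2))  # 1.8 (L/kWh)
```

The denominator matters: both metrics are normalized to IT energy, so a site cannot improve its WUE merely by running more servers.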
The Indian policy, state, and industry angle
Given these technical challenges, how is India positioning itself — via states, central policy, and industrial moves — to tackle the compute-energy puzzle?
State strategies and the land of opportunity
Andhra Pradesh is making a bold bet: cheap renewable energy (solar + wind), surplus Godavari water, battery and pumped storage projects, and intent to attract AI compute with renewable-only corridors. Its pitch: make data centres and AI clusters a distinct “slice” of the grid powered wholly by renewables.
Other states are also preparing:
- Karnataka is reportedly upgrading transmission to 765 kV to support heavy loads from data centres.
- States with strong wind/solar resources (Rajasthan, Gujarat, Tamil Nadu) will likely be hotspots for AI compute hubs.
Centre-level policy levers
The Indian government has recognized this coming crunch. MeitY and the Ministries of Power and Renewable Energy are reportedly collaborating to ensure that AI/data centres get special attention in power planning.
Possible policy levers include:
- Green incentives: Tie subsidies, land, or power allocation to sustainability metrics (renewables share, PUE targets, water efficiency), akin to how EV subsidies are tied to efficiency.
- Infrastructure status for data centres: Many states already provide incentives (e.g. lower transmission charges, open access). Ensuring consistency across the country will help.
- Time-of-day tariffs / differential pricing: Encourage compute loads to run during off-peak or renewable-rich windows.
- Mandated flexibility/curtailment clauses: Require data centres to accept curtailment or ramp-down during grid stress, in exchange for priority access or tariff benefits.
- Grid investment: Preemptive strengthening of high-voltage corridors, storage build-out (battery, pumped hydro), demand-response aggregation, and smart-grid integration.
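The time-of-day tariff lever can be quantified with simple arithmetic. The tariffs below are hypothetical placeholders, not actual DISCOM rates, but the shape of the incentive is real: deferrable load that moves into renewable-rich windows is directly rewarded.

```python
# Illustrative time-of-day tariff arithmetic for a deferrable compute
# load. Tariff values are hypothetical, not actual DISCOM rates.

peak_tariff = 8.0     # Rs/kWh during the evening peak (assumption)
offpeak_tariff = 4.5  # Rs/kWh during solar-rich midday (assumption)
load_mw, hours = 10, 8  # a 10 MW deferrable block run for 8 hours

energy_kwh = load_mw * 1000 * hours
peak_cost = energy_kwh * peak_tariff
offpeak_cost = energy_kwh * offpeak_tariff
print(f"Daily saving from shifting: Rs {peak_cost - offpeak_cost:,.0f}")
# Daily saving from shifting: Rs 280,000
```

At these assumed rates, a single shifted 10 MW training block saves lakhs of rupees per day, which is the kind of signal that makes flexibility a commercial decision rather than a compliance one.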
Industry and investor trends
The capital markets are beginning to see “green AI” as a differentiator. Investors are increasingly asking for renewable-powered data centre models, efficient cooling, and sustainable design metrics.
On the industry side:
- Hyperscalers such as Google and Microsoft are designing their next-generation data centres to run on 100% renewable energy, with battery buffering and scheduling intelligence.
- Chip and hardware vendors are optimizing for performance-per-watt, pushing more efficient GPU architectures.
- Indian start-ups developing AI compute infrastructure (clusters, middleware, orchestration) are being pressured to embed energy-efficiency as a design constraint, not an afterthought.
Risks, trade-offs, and unanswered questions
Even with aggressive planning, several caveats remain:
- Carbon lock-in: If initial AI clusters rely on fossil-backed grid power, they risk locking in emissions for a decade or more. Retrofitting to renewables later can be costly.
- Grid reliability: India’s grid still faces transmission losses, congestion, blackouts, and distribution-level fragility. A sudden surge in compute load (in one region) could destabilize local grids.
- Land and resource constraints: Renewable projects need land; data centres need power and water. These factors can conflict with agriculture, ecology, and local utility planning.
- Equity and regional imbalance: AI hubs may cluster in certain states, exacerbating regional disparities in energy investment and grid focus.
- Uncertainty in AI growth trajectory: If AI adoption slows, or model size plateaus, forecasts could overshoot. Conversely, if generative AI or multimodal compute demands explode, even the most aggressive scenarios may be too conservative.
The road ahead: building a responsible AI-powered grid
Here are key strategic moves India must double down on:
- Co-locate compute and renewables — wherever feasible — to reduce transmission burden and maximize “local green power” matching.
- Design for flexibility — AI workloads must allow dynamic scheduling, throttling, or ramping based on grid conditions.
- Mandate efficiency metrics — require minimum PUE, WUE, renewable fraction benchmarks for AI-scale data centres to get incentives.
- Accelerate storage and grid flexibility — invest heavily in battery storage, pumped hydro, demand response, smart grid.
- Regional capacity planning — state-level power planning must explicitly incorporate AI/compute load growth, not as an afterthought.
- Transparent energy accounting — data centre operators and AI projects should publish energy consumption, carbon attribution, and efficiency metrics.
- R&D in cooling, chip efficiency, and workload reduction — advanced cooling (liquid, immersion), efficient architectures, and algorithm-level energy optimization will be key.
India stands at a critical juncture: its ambition to become a global AI player is real and accelerating (as evidenced by the Google-Visakhapatnam push). But that ambition must wrestle with the realities of energy, water, infrastructure, and climate.
In short: India can power the AI revolution — but only if compute planning is deeply entwined with power planning from day one. It’s not enough to pour GPUs into data centres; what matters is how those GPUs are fed, cooled, managed, and balanced within a rapidly evolving grid ecosystem.
The true test of India’s AI credentials will not simply be in model benchmarks or chips per watt — it will be in whether its energy systems can absorb, adapt, and sustain the hunger of a new age of intelligence.