

AI Factories for the Next Decade - The India Blueprint
The Question: Every generation of compute has demanded a new generation of infrastructure. The shift to AI workloads is no different, except that the gap between what yesterday's facilities were optimized for and what AI demands today is wider, and the window to get ahead of it is shorter than most infrastructure cycles allow.
Building AI-ready infrastructure from the ground up looks different from retrofitting yesterday's facilities. The decisions made at the design stage on power, cooling, rack density, fiber, and land determine the economics of every workload that runs on that infrastructure for the next 15 years. Getting those decisions right requires understanding where AI infrastructure is going, not just where it is today.
The Session: A working roundtable with data center architects, infrastructure developers, power and cooling engineers, and enterprise AI buyers examining what it actually takes to build infrastructure that doesn't need to be rebuilt in five years.
What We're Examining:
Power architecture for AI-dense workloads: what the shift from 10 kW to 150 kW+ per rack means for facility design (a back-of-envelope power sketch follows this list)
Cooling at scale: liquid cooling, immersion, and hybrid approaches, where the economics land today
Land, fiber, and power procurement: the decisions that happen before a single rack is installed and why they determine everything downstream
Modular vs. purpose-built: which design philosophy wins for AI infrastructure at different scales
The 15-year infrastructure bet: designing for workloads that don't exist yet without overbuilding for today
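To give the power-density point above some scale, here is a minimal back-of-envelope sketch. The rack count, PUE figures, and cooling assumptions are illustrative only, not session material or benchmarks.

```python
# Back-of-envelope: what moving from ~10 kW to 150+ kW per rack implies
# for total facility power. All inputs below are illustrative assumptions.

def facility_power_mw(racks: int, kw_per_rack: float, pue: float) -> float:
    """Total facility draw in MW: IT load scaled by an assumed PUE."""
    it_load_kw = racks * kw_per_rack
    return it_load_kw * pue / 1000.0

# Hypothetical 1,000-rack hall, traditional air-cooled design.
legacy = facility_power_mw(racks=1000, kw_per_rack=10, pue=1.6)

# Same rack count at AI densities, assuming liquid cooling improves PUE.
ai_dense = facility_power_mw(racks=1000, kw_per_rack=150, pue=1.25)

print(f"Legacy hall:   ~{legacy:.1f} MW facility draw")    # ~16 MW
print(f"AI-dense hall: ~{ai_dense:.1f} MW facility draw")  # ~187.5 MW
print(f"Ratio:         ~{ai_dense / legacy:.1f}x the power on the same footprint")
```

Even with the better PUE that liquid cooling can deliver, the same floor plan draws roughly an order of magnitude more power, which is why power, land, and cooling decisions made before the first rack arrives dominate the economics downstream.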
Who Should Attend: Data center developers and EPC firms, power and utilities partners, enterprise infrastructure architects, hyperscaler and neocloud facility teams, real estate and infrastructure investors, government officials managing industrial and power zone allocations, and enterprise AI buyers whose deployment scale requires owning or co-developing infrastructure.