

Stanford x MIT Summit: The AI Hardware Stack - From Chip to Chiller
Power, Cooling & Rack-Scale Systems for the AI Era
This event explores the physical hardware systems powering the next era of AI, from advanced chip design and AI servers to power delivery, liquid cooling, rack-scale architecture, and data center infrastructure.
As AI workloads move from training into large-scale inference, robotics, and real-world deployment, the bottlenecks are becoming physical: power density, thermal limits, memory bandwidth, rack-level integration, energy availability, and the operational complexity of running high-density compute at scale.
This evening brings together researchers, engineers, founders, investors, and operators working across chip design, data center hardware, AI infrastructure, thermal systems, power systems, and large-scale compute.
Hosted by SFPlayground, Universal AI Services, and Aexodus Capital, in collaboration with the Stanford Alumni Club and the MIT Club of Northern California.
Topics on the Table
AI Chips & Accelerators
GPUs, ASICs, high-bandwidth memory, and advanced packaging together form the compute engine behind modern AI. This discussion will explore what is changing at the silicon level and why these systems demand more power, bandwidth, and thermal performance.
Powering AI at Scale
Power is becoming one of the biggest constraints on AI growth, with global data center electricity demand projected to roughly double by 2030. This discussion will cover energy availability, grid capacity, power delivery, and where next-generation AI data centers can realistically scale.
Cooling High-Density AI Hardware
AI systems are producing more heat at the chip, board, and server level, pushing cooling design into a central role. This discussion will explore how thermal engineering, liquid cooling, and server-level design are evolving to support higher-density AI workloads.
Standards & Deployment Readiness
Dense AI systems are forcing data centers to rethink deployment, monitoring, serviceability, and interoperability. This discussion will explore what needs to become more standardized, from rack interfaces to facility readiness, for AI hardware to scale reliably.
Speakers & Panelists
Featured speakers and panelists will be announced soon.
Schedule
5:30 PM — Doors and Check-In
6:30 PM — Program and Talks
7:30 PM — Dinner and Networking
9:00 PM — Event Concludes
Registration & Attendance
This event is open to:
Stanford, Harvard, and MIT Alumni
UAIS & Aexodus affiliates and partners
Chip architects and AI hardware founders
Data center leaders, infrastructure operators, and engineers
Investors, LPs, and VCs
Capacity is strictly limited. This gathering is designed to maintain the "high-signal" environment established in our previous series, ensuring every conversation moves the needle for the Physical AI industry.