

Inference After Dark
If you’re coming to GTC to find more GPUs, you’re solving last year’s problem. For many teams, the real constraint on AI growth is now power, not chips.
What is your power strategy when inference scales?
Join a small group of operators for a focused discussion about how teams are:
Dealing with power ceilings while AI inference demand keeps climbing
Matching workloads' latency requirements to infrastructure optimized for them, since not all workloads need the same SLA
Rethinking how to unlock inference capacity in months, not years
Who this is for:
CROs and GMs responsible for AI inference revenue
Infrastructure leaders running GPU clusters or AI/ML platforms
AI operators who feel power and capacity constraints in their day‑to‑day
If you are serious about scaling inference, this is your room.