

GPU After Hours: How Teams Procure GPUs at Scale
About the event
Free credits, launch promos, and early cloud assumptions don’t hold forever — especially once teams are running sustained training or inference on real workloads.
GPU After Hours is a relaxed, after-work gathering for funded AI startups where compute is no longer an experiment, but something that needs to be planned, procured, and locked in.
This evening is for teams who are:
Past early experimentation and into sustained training or inference
Running H100s, H200s, or comparable GPUs across hyperscalers, neoclouds, or hybrid setups
Thinking seriously about capacity planning, contracts, and long-term cost, not just benchmarks
RSVPs are curated to keep the room high-signal.
We’ll keep it intentionally simple:
A short fireside conversation on how GPU procurement changes as teams scale
A few sharp market insights on capacity, pricing, and availability
Plenty of time for off-the-record conversations over drinks
Event timeline
5:30 – 6:00 PM
Doors open
Arrivals, drinks, and informal intros.
6:00 – 6:30 PM
Open networking
Meet other funded AI founders and infra leaders. No agenda, just good conversations.
6:30 – 7:15 PM
Fireside chat
A candid conversation with Carmen Li on how teams procure GPUs at scale — moving from on-demand usage to reserved capacity, contracts, and long-term planning.
7:15 – 8:00 PM
Wrap-up & networking
Drinks, follow-up conversations, and small group discussions.
Featured speaker
Carmen Li is Founder & CEO of Silicon Data and CEO of Compute Exchange, building the financial infrastructure for global GPU compute markets. Previously at Bloomberg and Citi, she brings a market-structure perspective on how pricing and data shape decision-making at scale. Through Compute Exchange, she works closely with funded AI teams and infrastructure providers to help translate market signals into real procurement decisions, reserved capacity, and long-term compute strategy.
Hosted by
Compute Exchange helps funded AI startups secure and compare reserved GPU capacity across neocloud providers.
Instead of relying on spot availability or short-term incentives, teams use Compute Exchange to plan compute ahead of demand, lock capacity, and bring predictability to cost and performance.