


PyTorch Afters: The future of AI infra for RL + large-scale inference
Join us as we kick off the PyTorch Conference — whether you’re attending or simply based in the SF Bay Area. Together, we’ll explore the cutting edge of AI infrastructure and its role in shaping the future of RL for post-training and large-scale inference of media and world models.
At this session, the DataCrunch team and engineers from frontier AI labs will share lessons learned from building and scaling systems that push the state of the art. You’ll get a first look at B300 and GB300 NVL-72 systems, and a view of what the future holds for AI infra.
Learn from practitioners, connect with like-minded engineers, and unwind over food, drinks, and sharp discussions.
Speakers
Training world models using B200s | Paul Chang - ML engineer at DataCrunch
Training quantized LLMs efficiently on consumer GPUs | Erik Schultheis - Postdoctoral researcher at IST Austria
Agenda
5:30pm – Arrival
6:00pm – Talks + Q&A
7:00pm – Networking, food, & drinks
9:00pm – Wrap-up
Who Should Join?
AI researchers
ML engineers
Technical founders
AI product managers
This event is for those staying ahead of the curve with AI infra, optimization techniques, and production-grade systems at scale.
Have questions about our events or about DataCrunch in general?
Check out our Discord and Dev Community — it’s where everyone is already talking about the event, trading ideas, and finding out what DataCrunch is all about.
About DataCrunch
DataCrunch is a provider of cloud infrastructure for AI builders, trusted by frontier AI labs (1X, PrimeIntellect) and enterprises alike. DataCrunch offers production-grade GPU clusters and inference services, and was among the first to deploy the B200, B300, and GB300 platforms.
Other hosts and speakers (TBA)
