Eigen AI Workshop & Party on Efficient AI Computing @ NeurIPS 2025
Join us for an evening devoted to efficient AI computing at NeurIPS 2025, focused on making large language models more compute- and energy-efficient across the entire lifecycle: pre-training, post-training, reinforcement learning, inference, and systems design.
The event will bring together faculty, researchers, and industry practitioners for a series of short technical talks and discussions on:
Efficient pre-training – data, objectives, and architectures that improve sample, compute, and memory efficiency.
Efficient post-training – instruction tuning, preference optimization, and distillation under strict efficiency constraints.
RL for efficiency and control – reinforcement learning for policy optimization, routing, and adaptive resource-aware behavior.
Inference-time optimization – sparsity, quantization, caching, and decoding strategies for low-latency, cost-effective serving.
System and hardware efficiency – large-scale training/serving infrastructure and algorithm–systems–hardware co-design for end-to-end efficiency.
We will discuss how efficiency, scalability, and co-design jointly shape the next generation of LLM training and deployment in both research and production environments. Following the talks, participants are invited to continue the conversation over a casual dinner, with space for deeper technical exchange, networking, and exploring future directions in efficient AI at scale.
✨ Special Guests — Invited Speakers
🎓 Jason Cong — Distinguished Professor @ UCLA and Member of the US National Academy of Engineering, a pioneering researcher in electronic design automation, FPGA design, and domain-specific/customizable computing.
🎓 Song Han — Associate Professor @ MIT, a leading expert in efficient AI computing and hardware-aware machine learning.
🎓 Zhijian Liu — Assistant Professor @ UCSD, a researcher focused on efficient deep learning systems and algorithms.
🎓 Tianlong Chen — Assistant Professor @ UNC Chapel Hill, a specialist in optimization, sparse modeling, and efficient learning.
🎓 Banghua Zhu — Assistant Professor @ UW and Principal Research Scientist @ NVIDIA; formerly co-founder of Nexusflow AI; working on RL.
🎓 Fangyu Liu — Staff Research Scientist @ Google DeepMind, working on Gemini pretraining.
🎓 Yuchen Zhuang — Research Scientist @ Google DeepMind, focusing on RL and post-training.
🎓 Ryan Hanrui Wang — CEO @ Eigen AI, bridging the gap between academic research and industrial AI infrastructure.
…More invited speakers to be announced 🔜
Your Host: Eigen AI
Eigen AI is a full-stack AI infrastructure platform built to make modern AI systems faster, lighter, and truly scalable.
From data generation → training → compression → acceleration → deployment, Eigen AI enables teams to ship high-performance systems at a fraction of the cost, latency, and complexity.
We believe the next leap in AI will be defined by efficiency at scale — and we’re building the infrastructure that makes it possible.
