

Scaling LLMs: Parameters or Thinking or Data?
How do we truly scale the intelligence of large language models: by adding parameters, giving them more "thinking" time at inference, or exposing them to richer, task-specific data?
Join Prateek Jain, Principal Scientist & Director at Google DeepMind and Research Lead on Gemini Model Design, as he unpacks the science and engineering behind scaling LLM quality. This talk will explore how the different levers of scale (model size, test-time compute, and data curation) interact, and what that means for building the next generation of AI systems.
Here are some details:
📆 Date: 22nd Aug, 2025
⏰ Time: 5 to 6 PM
📍 Location: Together Fund, Indiranagar, Bangalore
👤 About the speaker
Prateek Jain is a Principal Scientist and Director at Google DeepMind, where he leads research on Gemini model design. With a distinguished career in machine learning and optimization, Prateek has worked at the frontier of scaling laws, model efficiency, and data-driven intelligence. His research and leadership continue to shape how cutting-edge AI models are designed, scaled, and deployed.