105 Went

Scaling LLMs: Parameters or Thinking or Data?

Hosted by Devansh Swarup, Kaushik Srinivasan & Shivam
Registration Closed
About Event

How do we truly scale the intelligence of large language models—by adding parameters, giving them more “thinking” time at inference, or exposing them to richer, task-specific data?

Join Prateek Jain — Principal Scientist & Director at Google DeepMind, and Research Lead on Gemini Model Design — as he unpacks the science and engineering behind scaling LLM quality. This talk will explore how the different levers of scale (model size, test-time compute, and data curation) interact, and what that means for building the next generation of AI systems.

Here are the details:

📆 Date: 22nd Aug, 2025

⏰ Time: 5 to 6 PM

📍 Location: Together Fund, Indiranagar, Bangalore

👤 About the speaker

Prateek Jain is a Principal Scientist and Director at Google DeepMind, where he leads research on Gemini model design. With a distinguished career in machine learning and optimization, Prateek has worked at the frontier of scaling laws, model efficiency, and data-driven intelligence. His research and leadership continue to shape how cutting-edge AI models are designed, scaled, and deployed.

Location
Together Fund
XJFV+F4V, 3rd Cross Rd, HAL 2nd Stage, Indiranagar, Bengaluru, Karnataka 560038, India