

Frontier AI Paper Reading Group: Scaling Intelligence — Distributed RL × Generative Perception
Join us for an evening exploring how distributed reinforcement learning, guided diffusion, and multi-objective optimization converge.
We’ll unpack the scaling of RL compute, the Marigold-DC framework (ICCV 2025), and real-world MORL deployments — with insights from leading researchers and practitioners.
Overview
This research-driven salon examines how distributed optimization, generative perception, and multi-objective learning intersect in modern AI systems.
Expect a critical discussion of recent breakthroughs — from large-scale reinforcement learning and diffusion-guided depth completion to adaptive agent architectures.
Featured Papers & Talks
⚙️ The Art of Scaling RL Compute for LLMs
Devvrit Khatri, Rishabh Tiwari et al. share best practices for scaling RL compute, covering distributed training recipes, throughput optimization, and reproducibility challenges.
🌸 Marigold-DC (Zero-Shot Depth Completion)
Massimiliano Viola (Stanford University) presents a diffusion-based method for monocular depth completion, achieving state-of-the-art zero-shot generalization (ICCV 2025).
🎯 Multi-Objective RL in Production
Pushpendre Rastogi (CTO, Vizops.AI · ex-DeepMind) discusses balancing reward, latency, and reliability in deployed agent systems, and how MORL enables Pareto-efficient policy design.
Why It Matters
As AI evolves from monolithic models to distributed, self-improving systems, the real challenge becomes coherence, not just scale.
This salon connects theory to practice — where scaling laws meet stability and real-world deployment.
Agenda
Time · Session
6:00 PM · Arrival + Refreshments
6:30 PM · Paper Discussion: Scaling RL Compute
7:00 PM · Research Talk: Marigold-DC with Massimiliano Viola
7:30 PM · Technical Exchange: Multi-Objective RL with Pushpendre Rastogi
8:00 PM · Closing & Connections
This event is co-hosted by Chemistry VC and AI Collective.