

AGI Talks: A night with AI researchers (India AI Impact Summit)
IndiaAI AGI Talks is a casual, after-hours get-together during the AI Impact Summit for people who like their conversations high-bandwidth and their questions uncomfortably sharp. Think less “event” and more “hallway track with better snacks”: drinks, good food, and a proudly geeky vibe — papers, edge cases, odd model behaviors, and the questions you only ask when nobody’s optimizing for applause.
We’re hosting this because the models are getting smarter and the plot is getting weird. The question isn’t “is this impressive?” It’s “what is this thing becoming, how does it generalize, and where does it crack as we approach AGI?”
This night is for people who build, study, or quietly obsess over the frontier: professors and students, research scientists, low-key AI heavy hitters, investors who think in decades, and founders with strong views on training dynamics. Low formality, high curiosity. Come with sharp takes, sharper questions, and the kind of counterexamples that ruin weak arguments (in a good way).
Expect strong opinions, playful skepticism, and lots of “cool, show me the failure case.”
Informal agenda (aka: likely rabbit holes):
Paths to AGI: scaling laws vs architectural shifts vs agentic scaffolding, and what “general” even means
RL scaling beyond today’s recipes: what breaks, what replaces RLHF, and what “beyond GRPO” might look like
Alignment faking & deception: threat models, early signals, and what would actually count as evidence
Red teaming that matters: adversarial eval loops, automated attacks, and where “safety testing” becomes theatre
The black-box problem: when interpretability helps, when it doesn’t, and what we do in the meantime
Agent autonomy: when we start seeing agents do real exploration, and what new failure modes show up
SSI as an agenda: what should be shared, standardized, open-sourced, and stress-tested together
No pitches. No performative hot takes. Just honest, technical conversation under Chatham House-ish norms: take the insights with you, don’t quote the people.
Attendance is limited. RSVP/request required. Location near Bharat Mandapam will be shared closer to the summit.
About Lexsi Labs:
Lexsi Labs is an AI safety and alignment lab building core foundations for Safe Superintelligence, with teams across London, Paris, and Mumbai. We work at the frontier where capability meets control, spanning reinforcement learning, alignment, mechanistic interpretability, and tabular foundation models. Over the past 12 months, we’ve published 13+ papers advancing key building blocks of SSI, including GBMPO (a new RL geometry that moves beyond KL-based optimization), MI-driven model steering, DLBv2 for scalable interpretability, and a full stack for training and deploying tabular foundation models, with OrionMSP as a top-performing TFM. Our north star is to build systems that can improve continuously without losing the thread of human intent. Learn more about our research here: Research Papers