

The Sycophancy Problem: When Helpful Models Stop Being Honest
Join us as Tejas Nanaware gives his talk "The Sycophancy Problem: When Helpful Models Stop Being Honest."
Large language models are trained to be helpful and aligned — but sometimes they agree a little too much. Sycophancy occurs when an LLM changes a correct answer to match a user's incorrect belief, reinforces flawed assumptions, or generates confident-sounding reasoning that masks motivated agreement. As models become more powerful and more agentic, this failure mode becomes increasingly important to understand and address.
In this talk, Tejas will explore why sycophancy emerges through RLHF and reward modeling, how it shows up in chain-of-thought reasoning, and how it's evaluated using benchmarks like SycEval. He'll also dig into recent mitigation techniques — including activation steering and new safety training approaches — and what they reveal about the tradeoffs at the heart of alignment.
If you're building with or thinking critically about LLMs, this is a conversation you won't want to miss.
The event is hosted at the Old Post Office, 433 W Van Buren St, Chicago, IL 60607.
Please reference our visitor guide page (https://www.notion.so/focusedlabs/Focused-Chicago-Visitor-Guide-External-fb804f764479438393f43ed5b15a441b#fb804f764479438393f43ed5b15a441b) for detailed information on how to access the building.
The event is in the Stage Coach space on the second floor. All guests must register in advance and check in with security upon arrival.
Agenda
6:00 - 6:30 Networking and Pizza
6:30 - 7:15 Presentation
7:15 - 7:30 Q&A
7:30 - 8:00 Networking and Wrap Up
Agentic Engineering Chicago is sponsored by Focused, an exclusive LangChain partner and boutique consultancy dedicated to building production-ready AI systems that work in the real world.
Continue the conversation between gatherings — join our Discord (https://discord.gg/bmdwxs2Q5F).