

Agentic Retrieval SF Meetup
Everyone assumes better retrieval and better reasoning will produce better outcomes for agents. We found the opposite: when plausibly wrong context gets promoted into an agent's reasoning loop, more capable models make more confident mistakes. This is the core problem of agentic retrieval, and it changes how retrieval infrastructure needs to be built.
Hornet is the retrieval engine built for agents. At this meetup, we'll give a brief introduction to Hornet and the problem space, followed by a talk from Lester Solbakken on what actually goes wrong when agents control their own retrieval. Then Bryan Bischof, Head of AI at Theory Ventures, and Till Döhmen from MotherDuck join for a panel discussion on the state of agents, retrieval, and where the infrastructure needs to go next.
Lester's talk covers the failure modes that emerge when retrieval metrics improve but agent behavior degrades: plausible distractors that survive ranking, inverse scaling under noise, and error compounding across multi-step loops. He'll walk through the design patterns that defend against these failures, from stricter evidence selection to sufficiency checks before acting.
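To give a flavor of the "sufficiency check before acting" pattern: the idea is that an agent gates its next action on whether the retrieved evidence actually supports it, rather than trusting whatever survived ranking. The sketch below is purely illustrative — the `Evidence` type, thresholds, and function names are assumptions for this example, not Hornet's API or Lester's implementation:

```python
# Illustrative sketch of a sufficiency check before acting.
# A single high-scoring but plausibly-wrong distractor should not be
# enough to act on; require multiple independently strong pieces of
# evidence, and abstain (or retrieve again) otherwise.

from dataclasses import dataclass


@dataclass
class Evidence:
    text: str
    score: float  # retrieval/relevance score in [0, 1] (assumed scale)


def sufficient(evidence: list[Evidence],
               min_score: float = 0.6,
               min_supporting: int = 2) -> bool:
    """True if enough independently strong evidence exists to act."""
    strong = [e for e in evidence if e.score >= min_score]
    return len(strong) >= min_supporting


def act(evidence: list[Evidence]) -> str:
    if not sufficient(evidence):
        # Don't answer confidently from context that merely looks
        # relevant: loop back to retrieval or abstain instead.
        return "abstain: insufficient evidence, retrieve again"
    return "act: grounded in " + "; ".join(e.text for e in evidence)


# One plausible distractor is not enough to clear the gate:
print(act([Evidence("distractor doc", 0.95)]))
# Two strong, independent pieces of evidence are:
print(act([Evidence("doc A", 0.8), Evidence("doc B", 0.7)]))
```

The point of the gate is that it shifts failure from a confident wrong answer (which compounds across a multi-step loop) to an explicit abstain-or-retrieve step the agent can recover from.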
Schedule
5:00 PM Doors open, networking, food & drinks
6:00 PM Welcome and introduction to Hornet
6:15 PM Talk: Smarter models, worse answers (Lester Solbakken)
6:45 PM Panel discussion with Bryan Bischof and Till Döhmen
7:15 PM Open networking
8:30 PM Wrap
Who should come
Engineers and technical leaders working on agent systems, retrieval infrastructure, or search. If you've spent time debugging why your agent's context window was full of the wrong documents, this is your crowd.
Free. Limited spots.