

Search Has a New Agentic User: What Now?
Hosted by HORNET.dev
Venue: 6-7 St Cross St https://work.life/event-space-london/event-space-farringdon/
For decades, we built search for one user: humans. The entire stack was optimized for short keyword queries, millisecond latency, and delivering top-ten blue links to an impatient human.
Now, search has a new user: agents.
This shift introduces significant changes. An agent doesn't impatiently scan a ranked list of snippets from top to bottom, and it doesn't issue a single short keyword query; it can run thousands of well-formulated natural-language questions in a relentless loop.
This new agentic search workload demands a new way of thinking not only about search itself but also about search infrastructure:
What does it look like when the "user" can read 100 documents at once?
How do you build search infrastructure for agents to handle the new workload?
And how do you even evaluate search when human-centric metrics like NDCG and MRR (which measure human impatience) no longer apply?
This event is for AI builders, search practitioners, and product leaders who are moving past "RAG 101" and exploring the new frontier of search. Join us to learn about the new patterns.
Speakers:
Lester Solbakken, co-founder HORNET.dev
Olena Gorbatiuk, Search Product Manager
Charlie Hull, Search Expert
Erik Schwartz, AI Executive
Location: Work.Life Farringdon
18:00 Doors open & check-in
Grab a drink. Chat with other builders.
18:30 Humans vs. Agents: the new era of search
Search is no longer just about humans typing queries into a box. AI agents are emerging as a new class of “users,” and they search in very different ways. In this talk, I’ll explore how human and agent search differ, why this matters, and what product managers and builders should do to prepare for a world where search must serve both people and machines.
Presented by Olena Gorbatiuk.
19:00 What the shift means for evaluating search
We are no longer optimizing search for the "top-three" snippets. An agent doesn't care if the relevant context is in position 1 or 2. Metrics like MRR (Mean Reciprocal Rank) and NDCG (Normalized Discounted Cumulative Gain) are built on flawed scanning assumptions that don't translate to the new user of search: agents.
Presented by Lester Solbakken.
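To see what "flawed scanning assumptions" means concretely, here is a minimal, purely illustrative sketch of how MRR and NDCG discount results by position (the standard formulas, not code from any speaker's talk):

```python
import math

def mrr(relevance_lists):
    """Mean Reciprocal Rank: average of 1/rank of the first relevant hit.

    A relevant document at rank 10 scores one tenth of the same document
    at rank 1 -- a penalty that models human impatience, not agent reading.
    """
    total = 0.0
    for rels in relevance_lists:
        for i, rel in enumerate(rels, start=1):
            if rel:
                total += 1.0 / i
                break
    return total / len(relevance_lists)

def ndcg(gains, k=None):
    """Normalized DCG: relevance gains discounted by log2(position + 1)."""
    k = k or len(gains)
    def dcg(gs):
        return sum(g / math.log2(i + 2) for i, g in enumerate(gs[:k]))
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

# One relevant document, at position 1 vs. position 10:
print(mrr([[1, 0, 0]]))    # 1.0
print(mrr([[0] * 9 + [1]]))  # 0.1
```

For an agent that reads all ten results in one pass, both orderings deliver exactly the same context, yet the metric scores them a factor of ten apart.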
19:30 A No-BS panel
When is a simple RAG pipeline good enough? When is the complexity of an agentic search system justified? What about multimodal search and embeddings? And is BM25 the answer to everything? We’ll open the mic.
20:15 Drinks & debrief
Corner the speakers or network with other builders.