

The Agentic Lifecycle with Opik
Organizations are deploying multi-step AI agents, but most teams lack the tooling to build them reliably. Errors compound, behavior is non-deterministic, and debugging without observability is nearly impossible. The cost: hallucinated outputs, runaway token spend, lost user trust. This session gives you production-grade observability and evaluation with Opik, fully open source.
📕 What you'll learn:
Tracing your agent's reasoning
Capture every step with Opik's @track decorator: tool use, retrieval, and decisions (see the tracing sketch after this list)
Building evaluation pipelines for agents
Build evaluation datasets, score agent outputs, and iterate rapidly with side-by-side experiments (see the evaluation sketch after this list)
Live end-to-end agentic build
A live code walkthrough: building and evaluating a multi-step research agent with Opik and OpenAI
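
For a sense of what tracing looks like, here is a minimal sketch using Opik's @track decorator, assuming the Opik Python SDK (`pip install opik`) and the OpenAI Python client are installed and configured. The function names and model are illustrative placeholders, not the exact agent built in the session.

```python
# Minimal tracing sketch (assumes OPENAI_API_KEY is set and Opik is configured;
# function names and model choice are illustrative placeholders).
from openai import OpenAI
from opik import track

client = OpenAI()

@track  # each decorated call is captured as a span in the agent's trace
def retrieve_context(query: str) -> str:
    # Hypothetical retrieval step; swap in your vector store or search tool.
    return f"Background notes relevant to: {query}"

@track  # nested decorated calls appear as child spans under the same trace
def answer(query: str) -> str:
    context = retrieve_context(query)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

print(answer("Why do multi-step agent errors compound?"))
```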
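And a hedged sketch of an evaluation pipeline with the same SDK: the dataset name, item fields, `research_agent` stand-in, and the exact-match metric are assumptions for illustration, not the session's exact setup.

```python
# Hedged evaluation sketch: dataset name, fields, and the stand-in agent are
# illustrative; adjust metric inputs to match the Opik version you install.
from opik import Opik
from opik.evaluation import evaluate
from opik.evaluation.metrics import Equals

def research_agent(question: str) -> str:
    # Stand-in for the multi-step research agent built in the session.
    return f"A draft answer to: {question}"

client = Opik()
dataset = client.get_or_create_dataset(name="research-agent-eval")
dataset.insert([
    {"question": "What does observability add to agent debugging?",
     "expected": "It exposes each step of the agent's reasoning."},
])

def evaluation_task(item: dict) -> dict:
    # Return the fields the scoring metrics read: model output plus a reference.
    return {
        "output": research_agent(item["question"]),
        "reference": item["expected"],
    }

evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[Equals()],  # simple exact-match baseline; swap in richer metrics
    experiment_name="research-agent-baseline",
)
```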
⭐️ Note: Please register for this event on Luma, as well as at the Maven registration link listed above