

Ensuring Accurate AI Agent Paths through Tracing and Evaluation
Modern AI agents don’t always follow a straight path — and they don’t always get it right. As agent-based applications move from prototype to production, observability becomes critical to ensuring correct, efficient, and trustworthy behaviors.
Join Arize AI, AWS, and CrewAI for an evening focused on tracing and evaluating AI agent workflows. We’ll explore real-world techniques for gaining visibility into agent decisions, verifying tool usage and parameters, and implementing feedback loops that drive continual improvement. Learn how to debug, optimize, and monitor agents in dynamic, non-deterministic environments with the help of modern observability tools and agent frameworks.
Whether you're building with Amazon Bedrock, CrewAI, or open-source frameworks like Strands, this event is for developers building the next generation of agentic systems.
5:30 PM – 6:30 PM
Registration & Networking
6:20 PM – 6:40 PM
The Case for Agent Observability
Jason Lopatecki, CEO of Arize
6:40 PM – 7:00 PM
Agentic AI on AWS
Karan Singh, Agentic AI Product Lead at AWS
7:00 PM – 7:20 PM
Deploying Enterprise Agents Into Production
Joao Moura, Co-Founder of CrewAI
7:20 PM – 7:50 PM
Instrumenting and Evaluating AI Agents at Scale
AWS + Arize + CrewAI
7:50 PM – 9:00 PM
Networking Reception
Please Note: Space is limited and registration does not guarantee entry. Attendance is on a first-come, first-served basis. A valid, government-issued physical ID is required for entry into the facility.