Presented by
LangChain Events

LangSmith 101: Debug, Evaluate, and Ship Reliable AI Agents

Zoom
About Event

Join us for a live demo of how leading engineering teams use observability and evaluation with LangSmith to ship reliable, production-ready AI agents faster.

Learn the foundations of understanding, improving, and confidently deploying AI agents. Get practical steps for debugging non-deterministic agent behavior, iterating on performance, and shipping reliably across any framework and model.

What you’ll learn:

- How to get end-to-end visibility into agent behavior with tracing of every agent step, tool call, and conversation turn, including latency, errors, and token counts.

- Practical ways to debug and improve agents using production traces, insights, and iterative prompt/tool refinement.

- How to set up evaluations (datasets, experiments, and subject matter expert annotations) to measure quality and prevent regressions.
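As a rough illustration of the per-step tracing idea above, here is a hand-rolled sketch (not the LangSmith SDK; the names `trace_step` and `TRACE_LOG` are hypothetical) showing how a decorator can record each agent step's inputs, output, latency, and any error:

```python
# Hypothetical sketch of per-step tracing -- not the LangSmith SDK.
import functools
import time

TRACE_LOG = []  # collected trace records, one dict per traced call


def trace_step(step_name):
    """Decorator that appends a trace record for each call to the step."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            record = {"step": step_name, "inputs": {"args": args, "kwargs": kwargs}}
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                record["output"] = result
                record["error"] = None
                return result
            except Exception as exc:
                record["error"] = repr(exc)
                raise
            finally:
                # Latency is recorded whether the step succeeded or failed.
                record["latency_s"] = time.perf_counter() - start
                TRACE_LOG.append(record)
        return inner
    return wrap


@trace_step("tool:search")
def search(query):
    # Stand-in for a real tool call an agent might make.
    return f"results for {query!r}"


search("langsmith tracing")
```

A hosted tracing platform like LangSmith captures the same kind of record automatically for every step, tool call, and model invocation, without hand-instrumenting each function.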

Join us to see LangSmith in action, get your questions answered live by our deployed engineering team, and leave with a clear playbook for observing, evaluating, and shipping production-ready agents.
