

Build a Customer Support AI Agent with LangGraph, LangSmith, and Human-in-the-Loop
AI agents have moved from research demos to production roadmaps, but most tutorials stop at "the LLM called a tool". Production agents need something more: conditional routing, durable state, human oversight, and observability you can actually debug with.
In this hands-on session, we'll build a production customer support agent end-to-end — one that triages incoming tickets, retrieves answers from a knowledge base, executes real account actions, and escalates to a human reviewer with full trace visibility whenever confidence drops below a threshold.
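To make the routing idea concrete before the session, here is a framework-free sketch of the core decision: answer automatically when triage confidence is high, otherwise escalate to a human reviewer. All names, the keyword-matching triage, and the 0.8 threshold are illustrative assumptions, not the session's actual code — in the workshop this logic becomes a LangGraph conditional edge.

```python
# Hypothetical sketch of confidence-based routing (not the session's exact code).
CONFIDENCE_THRESHOLD = 0.8  # below this, route the ticket to a human reviewer

# Toy stand-in for a knowledge base lookup.
KNOWLEDGE_BASE = {
    "reset password": "Use the 'Forgot password' link on the login page.",
}

def triage(ticket: str) -> tuple[str, float]:
    """Return the best-matching KB key and a confidence score."""
    for key in KNOWLEDGE_BASE:
        if key in ticket.lower():
            return key, 0.9  # strong keyword match
    return "", 0.2  # nothing matched; low confidence

def handle_ticket(ticket: str) -> dict:
    key, confidence = triage(ticket)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"route": "auto_reply", "answer": KNOWLEDGE_BASE[key]}
    # Low confidence: hand off to a human with the ticket context attached.
    return {"route": "human_review", "answer": None}

print(handle_ticket("How do I reset password?")["route"])  # auto_reply
print(handle_ticket("My invoice looks wrong")["route"])    # human_review
```

In the real agent, `triage` would be an LLM call and the `if` becomes a conditional edge in the graph, but the shape of the decision is the same.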
Along the way we'll cover the 2026 agent landscape, where LangChain fits versus where LangGraph takes over, and how LangSmith closes the loop with tracing, annotation queues, and evals. You'll leave with a complete, runnable notebook and a clear decision framework for your own agent projects.
Prerequisites: comfort with Python and basic familiarity with LLM tool-calling. No prior LangGraph experience required.