

LLMs in Prod w/ Exa & Portkey
AI adoption has moved from pilots to production. Teams everywhere are deploying LLMs, but what does it take to make them work organization-wide, and at scale?
At LLMs in Prod: SF Chapter, you’ll hear directly from AI leaders at top companies already running mission-critical LLM systems. They’ll share the use cases delivering real value, the lessons learned along the way, and the strategies they use to keep AI reliable and governed.
We’ll also explore the Model Context Protocol (MCP): its role as a foundation for AI agents, and what early implementations reveal about interoperability, governance, and scaling AI safely.
This is a space for AI leaders and teams to learn from peers who are already operating production AI at scale.