Langfuse Context: How Superhuman built Agents that are used by millions
For this edition of Langfuse Context, we're joining forces with Superhuman to host a cozy engineering meetup where builders can connect and share lessons on building AI applications.
Most teams build agents for a handful of users—or build agents that never get adopted at all. Superhuman is different. Their AI features are used by millions daily.
Langfuse CEO Marc Klingen will be joined by Ailian Gan, Director of AI Product at Superhuman, for a conversation on how they approach building agents at scale, covering:
Product team structure and how they organize around AI features
Identifying good problems to solve with AI by finding the intersection of what LLMs do well and what users actually need
Designing agent interfaces that drive adoption across user personas, from highly sophisticated power users to the mass market
Agenda
5:45 PM — Doors open
6:15 PM — Fireside conversation
6:45 PM — Audience Q&A
7:00 PM — Networking with drinks and food
Audience
This event is perfect for engineers, PMs, and technical founders who want to learn how to build AI products that people actually use—not just demos that impress investors.
Space is limited. We'll review all applications manually to curate a great crowd for networking.
About the Hosts
Superhuman (formerly Grammarly) is the AI productivity platform on a mission to unlock the superhuman potential in everyone. The Superhuman suite of apps and agents brings AI wherever people work, integrating with over 1 million applications and websites. The company’s products include Grammarly’s writing assistance, Coda’s collaborative workspaces, Mail’s inbox management, and Go, the proactive AI assistant that understands context and delivers help automatically. Founded in 2009, Superhuman empowers over 40 million people, 50,000 organizations, and 3,000 educational institutions worldwide to eliminate busywork and focus on what matters.
Langfuse is an open-source LLM engineering platform that helps developers build better AI applications through observability and analytics. We provide tools to trace, evaluate, and monitor production LLM systems, making it easier to debug issues, optimize costs, and improve output quality. Our platform supports prompt management, dataset curation, and detailed performance tracking across the entire LLM development lifecycle.