

Durable AI: Infra to Inference with WorkOS (March Edition)
Durable AI is a Bay Area meetup for engineers and technical leaders building AI systems that actually survive production.
This March edition brings together the stack — from infrastructure to inference — featuring speakers from:
Cerebras
Geotab
DigitalOcean
Temporal
WorkOS
It’s a room full of builders and engineers showing how AI systems are actually built and run today. Every company here powers real production workloads — inference at scale, durable orchestration, persistent memory, enterprise auth, cloud infrastructure. These are the tools teams are using (or evaluating) to ship AI products that last.
If you’re thinking about:
How to structure long-running AI workflows
What inference performance means for architecture
How to make AI systems enterprise-ready
How to operate AI reliably in production
Then this is the space you want to be in.
How It Works
We kick off with fast 5-minute lightning demos.
No fluff. Just: here’s what we’re building, here’s how teams use it in production, here’s what we’ve learned.
Then we open the floor and let you run free!
Each team hosts a table for deeper dives — architecture conversations, live demos, whiteboarding, and direct Q&A with the engineers themselves. Think of it like a mini-conference.
Less passive listening.
More real technical conversation.
More signal per square foot.
Raffle: Engage & Win
Each company will distribute raffle tickets during booth conversations based on the quality of discussion. Collect up to 5 tickets (one per booth), stick around until 7:30 PM, and you could win prizes from the companies here.
The more booths you engage with, the better your chances. Real conversations = real entries.
Who Should Come
AI developers shipping beyond prototypes.
Platform and backend engineers thinking about durability and scale.
AI leaders evaluating production-ready tools for their stack.
If you care about shipping AI systems — not just experimenting with them — come join us in SF!
About the Companies
DigitalOcean - DigitalOcean is the Agentic Inference Cloud, built to help you ship production AI without the hyperscaler overhead. Power intelligent agents on a full-stack platform where performance is high and pricing stays predictable.
Geotab - Geotab is a global telematics and AI platform connecting 5M+ vehicles. We process billions of daily telemetry signals and are building production-grade AI systems that turn raw vehicle data into real-time operational decisions — safely, durably, and at enterprise scale.
Cerebras - Cerebras delivers the world’s fastest AI inference, up to 15x faster than leading GPUs. Cerebras Inference is powered by our Wafer-Scale Engine (WSE-3), the world's largest AI chip. Explore our newest open-source model, GLM 4.7, and get free compute at cerebras.ai.
Temporal - A durable execution platform that ensures your AI agents and workflows handle long-running operations and reliably run to completion, even in the face of failures.