

Daytona & Aikido AI Builders - SF, February 2026 @Datadog
An event dedicated to exploring all things AI Engineering!
Event partners: Aikido & Datadog
Agenda
🕒 5:30 pm – 5:40 pm
Welcome and Opening Remarks
🎤 Stipe Lelas, Growth Marketing Manager at Daytona & Asako Hayase, Pacer, Community Ambassador at Daytona
🕒 5:40 pm – 5:55 pm
Talk "Cloud Sandboxes for AI Coding Agents"
🎤 James Murdza, Pacer, Community Ambassador at Daytona
Outline:
We'll demo how to run AI coding agents, including Claude Code, Codex, and OpenCode, in secure cloud-based sandboxes. We'll also evaluate integration techniques and look at the security implications of deploying coding agents in cloud environments.
🕒 5:55 pm – 6:15 pm
Talk "Get Security Done: Streamlining Application Security with Aikido"
🎤 Madeline Lawrence, Chief Growth Officer at Aikido Security
Bio:
After founding a micro-fund fresh out of law school, Madeline spent the past few years rising through the ranks of European VC, eventually becoming a partner at a €150M fund by the age of 25, with viral pranks and a punchy critique of the industry following in her wake. She then made the jump to the light side, leaving her career in venture capital for an operator role, and is now delivering "no bullshit security" to developers worldwide as CGO of Aikido.
Outline:
Security is often seen as a productivity killer when tools flood developers with false positives and break their flow. In this session, we explore a more streamlined approach to application security by integrating multiple scanners directly into the SDLC while cutting down noise. We'll also demonstrate how AI can help fix vulnerabilities and generate compliance reports automatically, so teams can spend less time triaging alerts and more time shipping code.
🕒 6:15 pm – 6:25 pm
Talk "Monitoring and Evaluating Agents to Speed Up Iteration Time"
🎤 Charles Jacquet, Datadog
Outline:
Datadog LLM Observability is the AI agent engineering platform helping engineers monitor, evaluate, and ship more reliable AI agents faster.
Every AI engineer knows the pain: you ship an agent update, and you're not sure if it actually got better – or just differently broken.
In this live demo, we'll walk through how Datadog LLM Observability helps you close that loop faster.
We'll cover three pillars of a faster, more confident iteration cycle:
Tracing – End-to-end visibility into your agent's behavior in production. See exactly what's happening across every call, tool use, and model response.
Online evaluation – Automatically flag regressions and quality issues as they happen in production, so you catch problems before users do.
Offline evaluation with Experiments – Before you ship, run structured experiments to compare prompts, models, and tools against versioned datasets. See exactly how a change performs – and ship with confidence.
The result: a tighter feedback loop from idea to production, with the observability data to back every decision.
🕒 6:25 pm – 6:35 pm
Talk "Connecting Agents to the Web"
🎤 Sofia Guzowski, Tavily
Outline:
Tavily is a developer-first web search API built to help teams retrieve high-quality results from the web. In this talk, we'll introduce Tavily and walk through its core endpoints – Search, Extract, and Crawl – with a live demo. We'll then highlight Tavily's newest product, the Research Endpoint, and showcase its SOTA performance through a live demo.
🕒 6:35 pm – 6:45 pm
Talk "What Comes After Search?"
🎤 Simantak Dabhade, DevRel and Growth at TinyFish
Outline:
Every generation of search, from Yahoo's portal to Google's PageRank and now AI-powered answer engines, made finding information easier. But as the internet evolved from a large set of indexed pages into complex web apps where people work, actions are taken, and data sits behind layers of steps, none of the prior tools were built for this complexity.
โ
TinyFish introduces a new category: web agents that don't just find information, but unlock the ability to execute complex workflows across websites. In this talk, we'll walk through why search hit a ceiling, what it looks like when AI agents actually operate the web at scale, and how developers can start building with production-grade web agent infrastructure today.
🕒 6:45 pm – 8:30 pm
Networking
With pizza and beverages
________________________
About the event
The Daytona AI Builders Series brings together the people building the next layer of AI infrastructure. Each month we focus on practical problems in scaling agents – from secure execution and cost control to reliability and new protocols. The goal: to equip builders with the tools and insights to ship agent-native systems into production.