

DeepAgents, Observability, and SLOs: Building Better Systems with Elastic
Join the Elastic Seattle User Group on Thursday, January 29 for an evening of hands-on technical talks, community networking, and pizza 🍕
This meetup features two practitioner-focused sessions, one on observability and SLOs and one on AI agents for developer workflows, with real demos and patterns you can apply right away.
✅ Why attend
Learn how teams operationalize SLOs and reliability with Elastic Observability
See a practical demo of building an AI-powered research agent using DeepAgents
Connect with Seattle engineers, SREs, and Elastic practitioners over food and drinks
📅 Date and Time:
Thursday, January 29th from 5:30-7:30 pm PST
📍 Location:
999 3rd Ave, Suite 700, Seattle, WA 98104 - We'll be in the Sunset Beach room
🚗 Parking:
The building has paid, secure onsite parking located on Madison St between 2nd and 3rd Avenues
Book a spot on SpotHero
🪧 Arrival Instructions:
Upon arrival at 999 3rd Ave, head to the Surf Incubator on floor 7. The meetup will take place in the Sunset Beach room.
Please note: The building locks at 6pm for anyone who parks outside the onsite garage, so we recommend arriving by the 5:30 start time. If you are running late, use the onsite garage instead; attendees who park there can access floor 7 with no restrictions after 6pm.
📝 Agenda:
5:30 pm: Doors open; say hi and eat some food
6:00 pm: Observability and SLOs with Elastic - Rajesh Sharma
6:30 pm: Build your own Developer Advocate with DeepAgents - Justin Castilla (Sr. Developer Advocate at Elastic)
7:30 pm: Event ends
👥 Who should attend
Developers, SREs, and platform engineers
Teams working with observability, reliability, or SLOs
Engineers curious about AI agents and modern developer workflows
Anyone interested in tech!
No sales pitches! Just technical content and community discussion.
💭 Talk Abstracts:
Observability and SLOs with Elastic - Rajesh Sharma
This session will show how to move from “we have metrics/logs/traces” to reliable, measurable user experience by defining and operating Service Level Objectives (SLOs) in the Elastic Stack. We’ll cover how to translate business expectations into SLI/SLO definitions, instrument services with Elastic Observability (APM, logs, metrics, synthetics), and use Kibana to track error budgets and detect fast-burning reliability issues before they become incidents. Attendees will leave with practical patterns for choosing meaningful SLIs, setting realistic targets, and wiring burn-rate alerting and dashboards that align engineers and stakeholders around outcomes.
Key takeaways:
- SLO fundamentals: SLIs, targets, and error budgets (what to measure and why)
- End-to-end observability: correlating APM + logs + metrics + synthetics to explain SLO misses
- Operationalizing reliability: error-budget reporting, burn-rate alerting, and actionable runbooks in Kibana (see the burn-rate sketch after this list)
- Adoption patterns: starting small, iterating targets, and avoiding vanity metrics
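To make the error-budget arithmetic concrete, here is a minimal sketch of the burn-rate math behind that alerting; the SLO target and multiwindow thresholds below are illustrative placeholders, not Elastic defaults:

```python
# Illustrative error-budget / burn-rate math (not Elastic-specific).
# Burn rate = observed error rate / error budget, where the error
# budget is 1 - SLO target. A burn rate of 1.0 spends the budget
# exactly over the full SLO window; higher burns it faster.

SLO_TARGET = 0.999              # e.g. 99.9% of checkout requests succeed
ERROR_BUDGET = 1 - SLO_TARGET   # 0.1% of requests may fail

def burn_rate(failed: int, total: int) -> float:
    """How fast the error budget is being consumed in a window."""
    return (failed / total) / ERROR_BUDGET

# Example: 42 failures out of 10,000 requests in the last hour
# burns the budget 4.2x faster than sustainable.
print(f"burn rate: {burn_rate(42, 10_000):.1f}x")

# A common multiwindow pattern pages only when a short and a long
# window both burn hot; the 14.4x threshold here is illustrative.
if burn_rate(80, 500) > 14.4 and burn_rate(640, 40_000) > 14.4:
    print("page: fast burn, error budget at risk")
```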
Demo: “Checkout API SLO in 10 minutes”
A small “Checkout” HTTP endpoint (e.g., POST /checkout) instrumented with Elastic APM, plus a synthetic test that runs the checkout flow every minute.
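For reference, wiring APM into such an endpoint takes only a few lines. Below is a minimal sketch using the elasticapm Flask integration; the service name, APM Server URL, and checkout logic are placeholders, and the per-minute synthetic check would be configured separately in Kibana:

```python
# Minimal "Checkout" endpoint instrumented with Elastic APM.
# pip install flask elastic-apm[flask]
from flask import Flask, jsonify, request
from elasticapm.contrib.flask import ElasticAPM

app = Flask(__name__)
app.config["ELASTIC_APM"] = {
    "SERVICE_NAME": "checkout-service",     # placeholder service name
    "SERVER_URL": "http://localhost:8200",  # your APM Server / integration
    "ENVIRONMENT": "demo",
}
apm = ElasticAPM(app)  # auto-instruments each route as an APM transaction

@app.route("/checkout", methods=["POST"])
def checkout():
    order = request.get_json(force=True)
    # ... payment and inventory logic would go here ...
    return jsonify({"status": "ok", "order_id": order.get("id")}), 200

if __name__ == "__main__":
    app.run(port=5000)
```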
Build your own Developer Advocate with DeepAgents - Justin Castilla (Sr. Developer Advocate at Elastic)
Staying current with emerging frameworks is exhausting, and our judgment is easily swayed by bias. New repositories appear daily, each claiming to solve our problems better than the last. How do we separate the signal from the noise?
In this session, we'll walk through building a multi-agent research system using LangChain's DeepAgents framework. The system uses specialized SubAgents to evaluate technology viability: one tracks GitHub metrics (stars, commit velocity, contributor health), another analyzes issues and discussions for red flags, and a third synthesizes findings into actionable recommendations.
You'll learn how DeepAgents' built-in planning tools, filesystem backend, and sub-agent delegation handle the complexity of parallel research tasks. We'll cover practical patterns for context isolation between agents, when to spawn SubAgents vs. handle tasks inline, and how to persist research findings across sessions using composite backends.
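As a taste of the pattern, here is a hedged sketch of sub-agent delegation with the open-source deepagents package; the tool stub, prompts, and agent names are assumptions for illustration, not the exact demo code:

```python
# Hypothetical sketch of DeepAgents-style sub-agent delegation.
# pip install deepagents
from deepagents import create_deep_agent

def github_metrics(repo: str) -> str:
    """Return stars, commit velocity, and contributor stats for a repo.
    (Stubbed here; a real version would call the GitHub API.)"""
    return f"{repo}: 12k stars, 40 commits/week, 85 active contributors"

# Sub-agents run with isolated context and their own instructions.
metrics_agent = {
    "name": "metrics-tracker",
    "description": "Tracks GitHub health metrics for a candidate framework.",
    "prompt": "Collect and summarize repository health metrics.",
}
issues_agent = {
    "name": "issue-analyst",
    "description": "Scans issues and discussions for red flags.",
    "prompt": "Flag maintenance risks: stale issues, breaking changes, churn.",
}

agent = create_deep_agent(
    tools=[github_metrics],
    instructions=(
        "You are a technology-vetting researcher. Delegate data gathering "
        "to sub-agents, then synthesize an adopt/hold recommendation."
    ),
    subagents=[metrics_agent, issues_agent],
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Should we adopt framework X?"}]}
)
print(result["messages"][-1].content)
```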
Walk away with a working architecture you can adapt for your own technology vetting workflows.