219 Going

Operational AI: Real-Time Analytics in Production

Hosted by Altinity Inc & 3 others
About Event

How can we build AI-powered applications that react quickly while maintaining trust and accuracy?

Join the Open Source Analytics Community in Bangalore for an evening dedicated to Real-Time Analytics & AI with Open Source!

As real-time data systems and AI become central to modern applications, from feature-rich dashboards to agent-driven decisioning, developers and builders need practical insights into how to process, analyze, and act on data at low latency.

In this meetup, we’ll bring together engineers, data practitioners, and open-source enthusiasts to explore how real-time analytics frameworks, streaming platforms, and open tooling power real-time intelligence and AI workflows. Expect practical talks, lessons learned from real use cases, and conversations about open technologies that make it possible to ingest, process, and serve data with low latency for AI-driven applications.

Whether you’re working with streams, feature stores, observability pipelines, real-time databases, or AI tools, you’ll walk away with actionable ideas for your next real-time project.

Food and drinks provided!

Shoutout to Nutanix for sponsoring the venue!


Speakers

  • Josh Lee, Open Source Dev Advocate @ Altinity

  • Ayush Sawant, MTS-2 @ Nutanix

  • Sasi Teja, Open Source Community Catalyst @ Kafka & GlassFlow

  • Debabrata Panigrahi, Founding Engineer @ Parseable


Description of the Talks

AI Monitoring 101: Why Your Old Dashboard Won't Work for LLMs

Speaker: Ayush Sawant, MTS-2 @ Nutanix

Abstract: As organizations shift from experimental AI to production-grade deployments, traditional infrastructure monitoring is no longer enough. Scaling LLMs and agentic workflows requires a dual-persona observability strategy that serves both the Admin managing the hardware and the User (developer or AI scientist) managing model performance.

In this session, we'll peel back the layers of AI infrastructure to explore why "standard" observability fails in the world of tokens and GPUs. We'll dive into a sample production-grade, full-stack observability pipeline that translates raw telemetry into actionable insights. Key takeaways include:

  • The Admin’s Dashboard: We'll discuss why infrastructure metrics are the foundation of cost-efficiency. Learn how monitoring GPU Utilization and VRAM prevents OOM (Out of Memory) crashes, why CPU/Memory overhead matters for RAG pre-processing, and how Disk I/O impacts model loading and swapping speeds.

  • The User’s Performance Suite: For those building the apps, we shift focus to LLM-specific telemetry. We will break down why Time to First Token (TTFT) is the "golden metric" for UX, how TPOT (Time Per Output Token) ensures fluid reading speeds, and why tracking MCP (Model Context Protocol) tool requests is vital for debugging agentic loops.

  • Economics of Inference: Learn how to correlate token usage (Input, Output, and Cached) with infrastructure costs to determine the true ROI of every agent call.

  • Real-World Use Cases: Practical blueprints for Anomaly Detection (stuck agents), Capacity Planning (moving to dedicated GPU slices), and Cost Attribution via multi-tenant billing.

Whether you're an SRE looking to stabilize your AI stack or a developer aiming to optimize agentic latency, this talk provides the blueprint for a transparent, scalable, and cost-effective AI deployment.
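To make the "golden metrics" above concrete, here is a minimal sketch of how TTFT and TPOT can be measured client-side from any streaming token generator. The helper and stream names are illustrative, not from any particular SDK; real pipelines would emit these values as telemetry rather than return them.

```python
import time

def measure_streaming_metrics(token_stream):
    """Consume a token stream and compute TTFT and TPOT.

    TTFT (Time to First Token): latency until the first token arrives,
    the metric users perceive as responsiveness.
    TPOT (Time Per Output Token): average gap between subsequent tokens,
    which determines whether output reads fluidly.
    """
    start = time.monotonic()
    first_token_at = None
    tokens = []
    for token in token_stream:
        if first_token_at is None:
            first_token_at = time.monotonic()
        tokens.append(token)
    end = time.monotonic()

    ttft = (first_token_at - start) if first_token_at is not None else None
    # TPOT averages the time after the first token over the remaining gaps.
    tpot = (end - first_token_at) / (len(tokens) - 1) if len(tokens) > 1 else None
    return {"ttft_s": ttft, "tpot_s": tpot, "tokens": len(tokens)}
```

In practice the same wrapper can also count input, output, and cached tokens per call, which is the raw material for the cost-attribution takeaway above.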

When Events Become Decisions: Real-Time AI with Kafka, GlassFlow & ClickHouse®

Speaker: Sasi Teja, Open Source Community Catalyst @ Kafka & GlassFlow

Abstract: Modern SaaS companies generate massive streams of customer events, but turning those events into real-time decisions remains a challenge. In this talk, we’ll walk through a real-time AI analytics pipeline designed for Customer 360 use cases.

We’ll explore how event data flows through Kafka, is deduplicated and joined using GlassFlow, and lands in ClickHouse for fast analytics. On top of this pipeline, anomaly detection identifies unusual customer behavior in real time, and an LLM analyzes these anomalies to produce human-readable explanations and actionable insights for customer account teams.
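The dedup and anomaly-flagging stages of such a pipeline can be sketched in a few lines of plain Python. This is a stand-in for what GlassFlow and the downstream detection logic do at streaming scale; the event schema (`event_id`, `customer_id`) and the count-based anomaly rule are illustrative assumptions, not the actual implementation.

```python
from collections import defaultdict

def dedupe_events(events, key="event_id"):
    """Drop duplicate events by id, keeping the first occurrence,
    so duplicates never reach the analytics store."""
    seen = set()
    out = []
    for e in events:
        if e[key] not in seen:
            seen.add(e[key])
            out.append(e)
    return out

def flag_anomalies(events, ratio=3):
    """Flag customers whose event count exceeds `ratio` times the
    average per-customer count: a crude stand-in for real detection."""
    counts = defaultdict(int)
    for e in events:
        counts[e["customer_id"]] += 1
    avg = sum(counts.values()) / len(counts)
    return sorted(c for c, n in counts.items() if n > ratio * avg)
```

In the full pipeline, the flagged customer IDs would be handed to an LLM along with the underlying events to generate the human-readable explanations mentioned above.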

Through a live proof-of-concept, we’ll demonstrate how streaming data and AI together can help SaaS companies detect issues earlier, understand customer behavior faster, and move from raw events to meaningful decisions.

Debugging AI Agents in Production

Speaker: Debabrata Panigrahi, Founding Engineer @ Parseable

Abstract: You deployed an AI agent. It ran fine for a week. Then support tickets started rolling in: wrong answers, phantom tool calls, tasks that silently stall. Your logs say everything is 200 OK. Your agent disagrees.

This is the observability gap in agentic AI. Traditional monitoring was designed for request-response services, not for non-deterministic, multi-step workflows where the same input produces wildly different execution paths. When an LLM is making decisions inside your system, "the server is healthy" tells you almost nothing.

In this talk, I'll take AI agents already running in production, auto-instrument them with OpenTelemetry in minutes, and use the resulting traces to observe failures as they happen. With traces flowing, we'll walk through the agent's execution step by step, pinpoint the root cause, fix it, and confirm the fix in production.

You'll walk away with:

  • A mental model for what "observability" actually means when an LLM is making decisions inside your system

  • Patterns for tracing agent tool calls, reasoning chains, and LLM interactions in production

  • A workflow to auto-instrument any agent with OpenTelemetry
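As a rough illustration of the tracing pattern, here is a dependency-free decorator that records for each agent tool call what OpenTelemetry auto-instrumentation would capture as a span: name, arguments, duration, and status. The `TRACE` list and function names are hypothetical; a real setup would create spans via the OpenTelemetry SDK and export them to a backend such as Parseable.

```python
import functools
import time

TRACE = []  # collected span records; a real agent would export these

def traced_tool(fn):
    """Wrap an agent tool call in a span-like record so that failures
    (including raised exceptions) show up in the trace, not just logs."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": fn.__name__, "args": args, "status": "ok"}
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            span["status"] = "error: %s" % exc
            raise
        finally:
            span["duration_s"] = time.monotonic() - start
            TRACE.append(span)
    return wrapper
```

The key design point is the `finally` block: the span is recorded whether the tool succeeds, raises, or stalls until a timeout fires, which is exactly the visibility that request-response monitoring lacks.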

⚠️ Altinity talk coming soon!

Location
Nutanix Technologies India Pvt Ltd
9th, MERCURY BLOCK, PRESTIGE TECH PARK, Marathahalli - Sarjapur Outer Ring Rd, Marathahalli, Kadubeesanahalli, Bengaluru, Bellandur Amanikere, Karnataka 560103, India