Scaling AI the Cloud Native Way

About Event

Bay Area tech friends — dive into AI, GenAI, zero-CVE security, and Kubernetes history over free food, drinks, swag, and plenty of time for networking and community discussion.

Join the open source community meetup for an evening of deep technical talks hosted by Nutanix, PingCAP, and RapidFort. Explore Private AI Evaluation in the Enterprise, accelerating GenAI apps with cloud-native databases, automating zero-CVE platforms, and understanding Kubernetes cluster history through real-world demos and practical insights.

Location: 1740 Technology Drive, San Jose, CA 95110 (Nutanix office, Room 120 Cranium)

Agenda (Pacific Time):

  • 5:30 - 6:00 pm (Check-in): Networking, Food and Drinks

  • 6:00 - 6:30 pm (Talk 1): Private AI Evaluation in the Enterprise, Nutanix

  • 6:30 - 7:00 pm (Talk 2): Accelerating GenAI Applications Development with a Cloud-Native Database, PingCAP

  • 7:00 - 7:30 pm (Talk 3): Beyond the Patching Treadmill: Automating Zero-CVE Platforms with Runtime Intelligence, RapidFort

  • 7:30 - 8:00 pm (Talk 4): Kubernetes Cluster History: The Why, The Where, and The How, Nutanix

  • 8:00 - 8:15 pm: Wrap-up

Talk #1: Private AI Evaluation in the Enterprise

Speakers: Neelabh Sinha, Kristen Pereira, and Dheeraj Akula, Nutanix

Abstract:

As enterprises move from experimental use of large language models to production-grade agentic systems (spanning customer support, root-cause analysis, and coding workflows), the question is no longer which model performs best on public benchmarks, but which model can be trusted with enterprise data. Agentic architectures require deep access to proprietary source code, internal APIs, operational telemetry, and customer information, making conventional evaluation approaches both insufficient and risky. Public benchmarks fail to reflect enterprise-specific compliance constraints and real-world failure modes, while third-party evaluation platforms introduce unacceptable risks of intellectual property leakage, PII/PHI exposure, and Shadow AI.

In this talk, we present a Private Evaluation Framework: a fully self-hosted, end-to-end evaluation pipeline that keeps the entire data → inference → scoring → leaderboard lifecycle within controlled infrastructure. We will walk through how curated, auditable datasets, isolated multi-model inference, and proprietary scoring enable principled, repeatable comparison of foundation models against internal legal, security, and reliability standards. The result is an enterprise-specific leaderboard that informs production model selection for agentic systems, without compromising data sovereignty, compliance, or trust.
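
To picture the data → inference → scoring → leaderboard lifecycle described above, here is a minimal illustrative sketch. It is not the framework presented in the talk; every dataset, model, and scoring function in it is a hypothetical stand-in for self-hosted components.

```python
# Hypothetical sketch of a self-hosted evaluation loop. Every stage
# (dataset, inference, scoring, leaderboard) stays inside controlled
# infrastructure; no third-party evaluation service is involved.
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    prompt: str      # drawn from a curated, auditable internal dataset
    reference: str   # expected answer / gold behavior


def evaluate(models: dict[str, Callable[[str], str]],
             dataset: list[EvalCase],
             score: Callable[[str, str], float]) -> list[tuple[str, float]]:
    """Run every candidate model over the private dataset and rank them."""
    leaderboard = []
    for name, generate in models.items():  # isolated, per-model inference
        scores = [score(generate(case.prompt), case.reference) for case in dataset]
        leaderboard.append((name, sum(scores) / len(scores)))
    return sorted(leaderboard, key=lambda entry: entry[1], reverse=True)


if __name__ == "__main__":
    # Toy stand-ins for real self-hosted model endpoints and a real scorer.
    dataset = [EvalCase("2 + 2 = ?", "4")]
    models = {"model-a": lambda p: "4", "model-b": lambda p: "5"}
    exact_match = lambda out, ref: float(out.strip() == ref)
    print(evaluate(models, dataset, exact_match))  # enterprise-specific leaderboard
```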

Talk #2: Accelerating GenAI Applications Development with a Cloud-Native Database

Speaker: Christopher Hofmann, PingCAP

Abstract:

Development of GenAI applications requires a robust and scalable data infrastructure. Today, the demands of AI agents and the explosion of data are forcing developers and architects to confront scalability issues sooner than ever. 

We will discuss the complexity of managing multiple databases for transactions, analytics, vector search, and semantic search. You will learn how TiDB, a cloud-native distributed SQL database, can serve as a unifying, AI agent-friendly database with elastic scalability and a pay-for-what-you-query model.

The presentation will conclude with a live demo of an AI application built using TiDB and Amazon Bedrock AgentCore.
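
To give a flavor of the "one database for transactions and vector search" idea ahead of the demo, the sketch below shows both kinds of query over a single MySQL-compatible connection to TiDB. It is not the demo code: the connection details, table, and data are invented, and the vector SQL (the VECTOR column type and VEC_COSINE_DISTANCE function) follows TiDB's documented vector-search syntax as we understand it.

```python
# Hypothetical sketch: one MySQL-compatible connection to TiDB serving both an
# ordinary relational write and a vector similarity search. Identifiers,
# credentials, and data are invented for illustration.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", database="demo")
with conn.cursor() as cur:
    # Relational side: a plain SQL table that also carries an embedding column.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            id INT PRIMARY KEY,
            body TEXT,
            embedding VECTOR(3)      -- TiDB vector column type
        )
    """)
    cur.execute("INSERT IGNORE INTO docs VALUES (1, 'hello tidb', '[0.1, 0.2, 0.3]')")
    conn.commit()

    # Semantic side: nearest-neighbour search by cosine distance,
    # in the same database and the same SQL dialect.
    cur.execute("""
        SELECT id, body
        FROM docs
        ORDER BY VEC_COSINE_DISTANCE(embedding, '[0.1, 0.2, 0.3]')
        LIMIT 5
    """)
    print(cur.fetchall())
conn.close()
```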

Talk #3: Beyond the Patching Treadmill: Automating Zero-CVE Platforms with Runtime Intelligence

Speakers: Russ Andersson and George Manuelian, RapidFort

Abstract:

Across regulated and enterprise environments, Kubernetes must be secure-by-default and audit-ready—but platform teams are stuck on a relentless patching treadmill. Static scanning floods teams with thousands of CVEs, most tied to unused packages that never run in production, creating months-long remediation cycles and constant compliance pressure.

This session introduces a runtime-informed security model that flips the script. By profiling real application behavior, teams can automatically build hardened, minimal images that include only what actually executes—dramatically reducing attack surface while supporting frameworks like FedRAMP, SOC 2, and PCI DSS. We’ll also tackle the Day 2 problem: detecting new CVEs at runtime and delivering GitOps-driven hotfixes in hours, not months. Using open-source tools such as Flux and Kimia, we’ll show how to maintain a near-zero CVE posture with real-time dashboards that map vulnerabilities to running processes, enabling evidence-based prioritization for both auditors and platform teams.

Attendees will learn how to:

  • Cut through CVE noise using runtime profiling

  • Automate hardened, compliance-ready images in CI/CD

  • Deliver rapid security hotfixes with GitOps

  • Maintain continuous, real-time security and compliance visibility
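
To make the runtime-intelligence idea concrete before the session, here is a small illustrative sketch of how a runtime execution profile can separate CVEs in code that actually runs from CVEs in packages that never execute. It is not RapidFort's or Kimia's actual tooling; the data structures and data are invented.

```python
# Illustrative only: partition scanner findings into "code that runs" vs
# "never executed", using a runtime profile of files observed executing.
# The package and CVE data below are fabricated for the example.

def prioritize(cves: dict[str, list[str]],
               package_files: dict[str, set[str]],
               executed_files: set[str]) -> tuple[dict, dict]:
    """cves: CVE id -> affected packages; package_files: package -> files it ships."""
    running_pkgs = {pkg for pkg, files in package_files.items() if files & executed_files}
    in_use, unused = {}, {}
    for cve, pkgs in cves.items():
        (in_use if any(p in running_pkgs for p in pkgs) else unused)[cve] = pkgs
    return in_use, unused


if __name__ == "__main__":
    cves = {"CVE-2025-0001": ["libfoo"], "CVE-2025-0002": ["libbar"]}
    package_files = {"libfoo": {"/usr/lib/libfoo.so"}, "libbar": {"/usr/lib/libbar.so"}}
    executed_files = {"/usr/lib/libfoo.so"}   # what the runtime profile observed
    in_use, unused = prioritize(cves, package_files, executed_files)
    print("fix first (code that runs):", in_use)
    print("candidates for removal from the image:", unused)
```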

Talk #4: Kubernetes Cluster History: The Why, The Where, and The How

Speaker: Daniel Lipovetsky, Nutanix

Abstract:

As Kubernetes runs cloud-native and AI workloads—including model serving and inference—understanding its evolution over time is critical, since even small configuration or access changes can affect reliability, performance, and governance.

Kubernetes is a complex, distributed, and eventually consistent system (despite a strongly consistent control plane). Our standard tools show us only a snapshot of the system, a single moment in time, which fails to give a complete picture; without that picture, we struggle to respond when the system breaks or misbehaves.

Although we only see a snapshot of it, every cluster has a history. But where is it? Some of it, of course, can be found in Pod logs. But those give a biased view, only showing what is relevant to one application. Some of it is found in Events, ephemeral resources that record important messages. But most of the history, unbiased and unabridged, is found in the changes to the cluster resources themselves, and those changes are tracked in the Kubernetes Audit Logs.

This session will cover how to ensure that the Audit Logs capture the history that is important to us, and will demonstrate tools that help us visualize and interpret this history—enabling deeper operational insight across both traditional and AI-driven workloads.
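
As a small preview of what "history in the Audit Logs" looks like, the sketch below reads an audit log (JSON lines, available when the API server is configured with an audit policy and a log backend) and prints a change timeline for one resource. The log path, namespace, and resource name are assumptions for illustration.

```python
# Minimal sketch: read a Kubernetes audit log (JSON lines) and print a timeline
# of write operations against one resource. Assumes the API server writes audit
# events to a log file under a policy that records the resources you care about;
# the path, namespace, and resource name below are assumptions.
import json

AUDIT_LOG = "/var/log/kubernetes/audit.log"
WRITE_VERBS = {"create", "update", "patch", "delete"}


def timeline(path: str, namespace: str, name: str):
    with open(path) as log:
        for line in log:
            event = json.loads(line)
            ref = event.get("objectRef", {})
            if (event.get("verb") in WRITE_VERBS
                    and event.get("stage") == "ResponseComplete"
                    and ref.get("namespace") == namespace
                    and ref.get("name") == name):
                yield (event.get("requestReceivedTimestamp"),
                       event.get("verb"),
                       f'{ref.get("resource")}/{ref.get("name")}',
                       event.get("user", {}).get("username"))


if __name__ == "__main__":
    for when, verb, what, who in timeline(AUDIT_LOG, "default", "my-deployment"):
        print(when, verb, what, "by", who)
```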
