

Ray Mini Summit: Scaling Multi-Modal AI Pipelines
Join an exclusive event with industry leaders and technology executives as we explore the architectural foundations required to deploy AI at scale. As AI moves from experimentation to production, the bottlenecks are shifting from model selection to data processing and inference optimization.
This intimate session brings together forward-thinking leaders to exchange insights on building robust data pipelines capable of handling diverse data types, and designing inference architectures that balance latency, throughput, and cost.
Key Discussion Topics
Multi-Modal Data Processing: How to ingest, process, and unify text, image, and structured data into a single AI pipeline.
Distributed Training: Scaling model training across clusters using Ray and Anyscale, from fine-tuning to full training runs.
Deploying on AWS: Running Anyscale on AWS, choosing between EC2, EKS, HyperPod, Spot instances, and Graviton, and understanding the setup process.
Resource Efficiency at Scale: Using Ray and fractional GPUs to get more out of your available compute.
Who Should Attend
Technical Leaders, CTOs, Lead Solution Architects, Platform Owners, Product Owners
Agenda
12:00 PM - 1:00 PM | Arrivals & Welcome Lunch
Arrive, check in, and grab lunch while meeting fellow attendees and speakers.
1:00 PM - 1:30 PM | Welcome & Opening Remarks
A short introduction setting the stage for the day: where enterprise AI stands today and what it actually takes to move from proof-of-concept to production. Robert Nishihara, Co-Founder, Anyscale (Ray).
1:30 PM - 2:15 PM | Session 1: Multi-Modal Data Processing for AI Pipelines
Most AI applications need to work with more than just text. This session covers a practical approach to building pipelines that ingest, process, and combine text, image, audio, and structured enterprise data.
2:15 PM - 2:45 PM | Customer Spotlight: BMW Connected AI
Fine-tuning and serving LLMs for connected vehicles with Ray.
2:45 PM - 3:30 PM | Session 2: Distributed Training with Ray & Anyscale
Training large models on a single machine only gets you so far. This session covers how to scale training across clusters using Ray and Anyscale, including distributed fine-tuning, handling checkpoints, and managing training jobs efficiently.
3:30 PM - 4:15 PM | Session 3: Overview of Ray & Anyscale on AWS
A walkthrough of the options and trade-offs for running Anyscale on AWS. We'll cover EC2, EKS, HyperPod, Spot instances, and Graviton: when to use each, how they compare on cost and complexity, and what the deployment process looks like end to end.
4:15 PM - 4:30 PM | Conclusion
4:30 PM | Dinner & Networking
Continue the conversation over dinner. Connect with peers working through the same challenges in their own organizations.
7:00 PM | Event Concludes