Cover Image for Munich vLLM Meetup
Join the vLLM community to discuss optimizing LLM inference!
123 Going
Registration
Approval Required
Your registration is subject to host approval.
Welcome! To join the event, please register below.
About Event

Join Us for the vLLM Inference Meetup in Munich!

Hosted by Red Hat, AMD, Mistral AI, and CROZ, this event takes place on 24 February 2026 in Munich, Germany, and brings together vLLM users, developers, and AI engineers to explore the latest in GenAI inference.

Learn from the vLLM Team

Hear directly from leading vLLM committers, contributors, and users shaping the project’s roadmap and building its most advanced features. Expect deep technical talks, live demos, and plenty of time to connect with the community.

Join the Pre-Event Hands-On Workshop: Building AI Agents with vLLM, MCP & AMD GPUs

Our official vLLM meetup begins at 17:00 (see agenda below). Before the main session, join Red Hat and AMD at 16:00 for a beginner-to-intermediate, instructor-led, hands-on GPU workshop. You will learn how to set up an OpenAI-compatible endpoint that serves multiple models concurrently with vLLM, and how to build multi-agent applications on top of it. The workshop culminates in a short application development challenge. Doors open at 15:30 for the workshop. Space is limited — indicate your interest by selecting the workshop option during registration. To access an AMD GPU during the workshop, please also create an account with the AMD AI Developer Program.
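To give a flavor of what the workshop covers: vLLM exposes an OpenAI-compatible HTTP API (typically started with `vllm serve <model>` and listening on `/v1`), so any OpenAI-style client can talk to it. The sketch below builds a standard chat-completion request body for such an endpoint; the URL and model name are illustrative placeholders, not workshop specifics.

```python
import json

# Illustrative values — the actual endpoint and model are set up in the workshop.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "Qwen/Qwen2.5-1.5B-Instruct"

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body for a vLLM server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

payload = json.dumps(build_chat_request("What is vLLM?"))
print(payload)
# To actually send it, POST the payload to VLLM_URL with a
# Content-Type: application/json header (requires a running vLLM server).
```

Because the API matches OpenAI's schema, the same request shape works whether the server hosts one model or several; the `model` field selects which one answers.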

vLLM Meetup Agenda (Subject to Change & More Awesomeness)

15:30 — Doors Open for the Hands-On Workshop

16:00 – 17:00 — Hands-On Workshop by AMD and Red Hat

17:00 – 17:30 — Doors Open & vLLM Meetup Registration

17:30 – 17:40 — Welcome & Opening Remarks

Christopher Nuland, Principal TMM, Red Hat AI
Detlev Knierim, AI Sales Lead, Red Hat AI

17:40 – 18:00 — Intro to vLLM and vLLM Project Update

Nicolò Lucchesi, vLLM Committer & Sr. Software Engineer, Red Hat AI

18:00 – 18:20 — Launching vLLM into Production: Benchmarks, Optimizations, and Lessons Learned

Petar Zrinscak, AI Consultant, CROZ

18:20 – 18:40 — vLLM Inference Optimization on AMD GPUs

Amanzhol Salykov, Sr. Member of Technical Staff, AMD

18:40 – 19:00 — Mistral AI & vLLM

Xuanyu Zhang, Software Engineer, Mistral AI
Patryk Saffer, Inference Engineer, Mistral AI

19:00 – 19:20 — Scaling LLM Inference on Kubernetes: Fast, Cost-Efficient, Production-Ready with vLLM

Christopher Nuland, Principal TMM, Red Hat AI

19:20 – 21:00 — Networking, Food & Drinks

Important Information

Registration Deadline: Registration closes 24 hours prior to the event. We will be unable to admit any attendees who are not registered.

Check-In: Please bring a photo ID to verify your registration upon arrival.

We look forward to seeing you there!

Location
Aschauer Str. 30
81549 München, Germany