

Tokyo vLLM Meetup (for Online Attendees)
Join us for the Tokyo vLLM Meetup!
We’re excited to invite you to the Tokyo vLLM meetup, hosted by IBM, Red Hat, and AMD on October 9th, 2025.
This is the entry for attendees who want to join online. If you want to attend the on-site event, please use the original event entry.
This meetup brings together vLLM users, developers, maintainers, and engineers to explore the latest in optimized inference. Expect deep technical talks and plenty of time to connect with the community.
Agenda
5:00pm – Doors Open & Meet the vLLM Team
5:30pm – Opening Remarks by Tatsuhiro Chiba (IBM Research)
5:40pm – Intro to vLLM and Project Update by Mori Ohara (IBM Research)
6:10pm – Optimized Model Serving with vLLM V1 and ROCm by Kenshi Tachikawa (AMD)
6:40pm – Lies, Damned Lies and Benchmarks: Exploring LLM Inference Benchmarks for Long Context Workloads by Valentijn van de Beek (IBM Research)
7:10pm – Fine-Grained Dynamic Resource Allocation for Disaggregated Inference Serving in llm-d by Sunyanan Choochotkaew (IBM Research)
7:40pm – Q&A and Discussion
8:00pm – Food, Refreshments, and Networking 🤝
Important Information
Registration Deadline: Registration closes 24 hours before the event. We will be unable to admit unregistered attendees.
We look forward to seeing you there!