Tokyo vLLM Meetup
Join the vLLM community to discuss optimizing LLM inference!
About Event

Join us for the Tokyo vLLM Meetup!

We’re excited to invite you to the Tokyo vLLM meetup, hosted by IBM, Red Hat, and AMD on October 9th, 2025.

For attendees who would like to join online: please register at https://luma.com/xqkv2zqa instead of through this page.

This meetup brings together vLLM users, developers, maintainers, and engineers to explore the latest in optimized inference. Expect deep technical talks and plenty of time to connect with the community.

Agenda

5:00pm – Doors Open & Meet the vLLM Team
5:30pm – Opening Remarks by Tatsuhiro Chiba (IBM Research)
5:40pm – Intro to vLLM and Project Update by Mori Ohara (IBM Research)
6:10pm – Optimized Model Serving with vLLM V1 and ROCm by Kenshi Tachikawa (AMD)
6:40pm – Lies, Damned Lies and Benchmarks: Exploring LLM Inference Benchmarks for Long Context Workloads by Valentijn van de Beek (IBM Research)
7:10pm – Fine-Grained Dynamic Resource Allocation for Disaggregated Inference Serving in llm-d by Sunyanan Choochotkaew (IBM Research)
7:40pm – Q&A and Discussion
8:00pm – Food, Refreshments, and Networking 🤝

Important Information

Registration Deadline: Registration closes 24 hours before the event. We will be unable to admit attendees who are not registered.

We look forward to seeing you there!

Location
IBM Japan
19-21 Nihonbashihakozakichō, Chuo City, Tokyo 103-8510, Japan
17F (please come to the office reception at 1F)