
First vLLM Korea Meetup with Red Hat and Rebellions

Registration
Approval Required
Your registration is subject to approval by the host.
About Event

We are excited to invite you to the inaugural vLLM meetup in Korea hosted by Red Hat and Rebellions in Seoul.

This is your chance to connect with a growing community of vLLM users, developers, maintainers, and engineers from Red Hat. We'll dive deep into technical talks, share insights, and discuss our journey in optimizing LLM inference for performance and efficiency.

Venue

The meetup will be held at Aimed*, on the 12th floor of Majesta City Tower 2, a 5-minute walk from Seocho Station Exit 4. Parking is not provided.

Kakao Map: https://place.map.kakao.com/612365628
Naver Map: https://naver.me/G2EOU4ZI

*Depending on registration demand, we may change the venue. Should that happen, we'll send out a notification.

Tentative Agenda

Registration opens at 6:00pm, and sessions kick off at 6:30pm. For those joining us straight from work or school, we'll have light food and beverages available.

6:00-6:30: Registration and Networking
6:30-7:00: Intro to vLLM for Fast and Efficient AI Inference
7:00-7:30: High-Performance LLM Scaling with llm-d
7:30-8:00: Deep Dive into vLLM TPU Integration
8:00-8:30: Building and Testing Infrastructure for vLLM
8:30-8:40: Break
8:40-9:10: Supercharging Rebellions NPUs with vLLM
9:10-9:30: Closing Remarks, Q&A, and Social

In addition to Red Hat and Rebellions, our thanks to the teams from SqueezeBits and PyTorch KR who are contributing to this meetup!

Location
Aimed, 12th Floor, Majesta City Tower 2
12 Seocho-daero 38-gil, Seocho-gu, Seoul, Republic of Korea