Israel vLLM Meetup
Join the vLLM community to discuss optimizing LLM inference!
212 Going
Registration
Event Full
This event is full. Join the waitlist and you will be notified if additional spots become available.
About Event

Join Us for the vLLM Inference Meetup at IBM Givatayim, Israel!

Hosted by Red Hat, IBM, and NVIDIA, this event takes place on 14 January 2026 at IBM Givatayim and brings together vLLM users, developers, and AI engineers to explore the latest in GenAI inference.

Learn from the vLLM Team

Hear directly from leading vLLM committers and users shaping the project’s roadmap and building its most advanced features. Expect deep technical talks, live demos, and plenty of time to connect with the community.

vLLM Meetup Agenda (Subject to Change & More Awesomeness)

17:00 – 17:30 — Doors Open, Snacks & Drinks

17:30 – 17:45 — Welcome & Opening Remarks

17:45 – 18:20 — vLLM Keynote: Intro to vLLM and Project Update

Thomas Parnell, vLLM Committer & Principal Research Scientist, IBM Research Zurich

18:20 – 18:50 — Distributed Inference with llm-d

Vita Bortnikov, IBM Fellow & Senior Manager, IBM Research Israel

18:50 – 19:20 — Optimizing Nemotrons: Reaching Roofline Performance with vLLM

Tomer Asida, Sr. LLM Inference Software Engineer, NVIDIA

19:20 – 20:00 — Startup Panel: Building the AI Infrastructure of the Future

Kfir Wolfson, Principal System Architect, Pliops

Boaz Touitou, CTO, Impala

Roy Nissim, CEO, Jounce (now Red Hat)

Alon Yariv, CEO, Atero (now Crusoe)

20:00 – 20:30 — Networking, Food & Drinks

Important Information

Registration Deadline: Registration closes 24 hours prior to the event. Attendees who are not registered cannot be admitted.

Check-In: Please bring a photo ID to verify your registration upon arrival.

We look forward to seeing you there!

Location
Hashahar Tower
Ariel Sharon St 4, Giv'atayim, 5320047, Israel