
Inference Starts Here: Open Inference. Real Impact. One day.

Register to See Address
Boston, Massachusetts
Registration
Approval Required
Your registration is subject to host approval.
Welcome! To join the event, please register below.
About Event

Join The Open Accelerator for a fast-paced, single-day hackathon focused on building with an open-source AI inference stack — and seeing firsthand how models are served!

This isn't a tutorial — this is your chance to build on vLLM alongside core committers from Red Hat and IBM, who will be right there as your mentors. Tackle real open-source issues and build production-viable applications across our five challenge tracks: optimizing inference cost, creating a RAG pipeline, using speculative decoding, maximizing performance, and bringing your own problem.

We use a "one theme, three skill lanes" approach (Starter, Builder, and Deep Tech), so whether you're shipping your first model, vibe-coding a frontend with Cursor, or optimizing token throughput at the kernel level, there is a path for you. Vibe coding isn't just allowed—it's highly encouraged!

Registration includes access to shared GPU compute, pre-quantized LLM models, food, and great collaborators.

Registration acceptances will be sent out the week of April 20th, along with pre-reads on the tooling, the proposed challenges, and the precise location.

Teams of 3–5 will be formed onsite by those who show up to build!

Come for the tech, stay for the community. Let's see what you and your machine can do!

Location
Please register to see the exact location of this event.
Boston, Massachusetts