Presented by
Fireworks AI

Fine-Tuning and Evaluation for Open-Source LLMs

About Event

Join us on February 11th for a hands-on, engineering-first workshop focused on how teams fine-tune and evaluate open-source LLMs in real production systems.

This session is designed for AI engineers and tech leads at startups and enterprises who are already building with LLMs and want to move beyond prompting. We’ll cover when supervised fine-tuning (SFT) makes sense, when reinforcement fine-tuning (RFT) actually helps, and how strong evaluation loops keep systems reliable as they scale. You’ll work through a live, hands-on exercise that mirrors how modern teams design training and evaluation pipelines for real use cases, with an emphasis on practical trade-offs, iteration speed, and measurable model improvements.

You’ll walk away with:

  • Clear guidance on SFT vs. RFT and how to choose between them

  • Practical evaluation strategies used in production environments

  • Hands-on experience building a fine-tuning and eval loop end to end

  • A stronger framework for shipping and maintaining high-quality models

Agenda:

  • 4:00 – 4:30 PM: Welcome & Networking

  • 4:30 – 5:15 PM: Fine-Tuning in Practice (SFT, RFT, trade-offs)

  • 5:15 – 6:30 PM: Hands-On Workshop: Fine-Tuning and Evaluating Open-Source LLMs

  • 6:30 – 7:00 PM: Wrap-up & Networking

Food, drinks, and snacks will be provided.

Instructor: Aishwarya Srinivasan, Head of DevRel, Fireworks AI

Location
Please register to see the exact location of this event.
New York, New York