Presented by
Fireworks AI

Fine Tuning and Evaluation for Open-Source LLMs

Zoom
Past Event
About Event

Join us on February 11th for a hands-on, engineering-first workshop focused on how teams fine-tune and evaluate open-source LLMs in real production systems.

This session is designed for AI engineers and tech leads at startups and enterprises who are already building with LLMs and want to move beyond prompting. We’ll cover when supervised fine-tuning (SFT) makes sense, when reinforcement fine-tuning (RFT) actually helps, and how strong evaluation loops keep systems reliable as they scale. The live, hands-on workshop mirrors how modern teams design training and evaluation pipelines for real use cases, with an emphasis on practical trade-offs, iteration speed, and measurable model improvements.

You’ll walk away with:

  • Clear guidance on SFT vs reinforcement fine-tuning and how to choose between them

  • Practical evaluation strategies used in production environments

  • Hands-on experience building a fine-tuning and eval loop end to end

  • A stronger framework for shipping and maintaining high-quality models

  • Top use cases for the latest Kimi K2.5 model


Agenda:

  • 4:00 – 4:15 PM: Fine-Tuning in Practice (SFT, RFT, trade-offs)

  • 4:15 – 5:30 PM: Workshop: Fine-Tuning and Evaluating Open-Source LLMs

_____

Instructor: Aishwarya Srinivasan, Head of DevRel, Fireworks AI
