

AI.SEA Co-Labs: How Does Fine-Tuning Actually Work (and When Should You Bother)?
Most builders have a vague sense that fine-tuning exists. Fewer know what it actually does to a model. Even fewer know when to reach for it versus just prompting better, adding RAG, or switching models entirely.
This session fixes that.
We'll walk the full spectrum — from in-context learning to LoRA to full fine-tuning — with one question at every stop: what are you actually changing, and toward what? By the end you'll have a mental model sharp enough to make the call yourself.
As always, guided discussion. No passive listening.
What we'll cover
— Why the pretrained model already has opinions before you touch it
— The fine-tuning spectrum: what changes, what it costs, and when each approach makes sense
— LoRA, QLoRA, and why low-rank approximations work better than they should
— Loss functions, SFT, and DPO — what you're actually optimising for and how you'd know if it's wrong
— Hands-on: fine-tuning a small model + comparing base vs fine-tuned outputs
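To make the LoRA bullet concrete before the session: the core trick is to freeze the pretrained weight matrix and learn only a low-rank update. A minimal numpy sketch (dimensions and rank chosen for illustration, not taken from any particular model):

```python
import numpy as np

# LoRA in miniature: freeze a large weight matrix W and learn a
# low-rank update delta_W = B @ A instead of touching W directly.
d, k, r = 512, 512, 8                  # layer dims and LoRA rank (r << d, k)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))        # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01 # trainable, initialised small
B = np.zeros((d, r))                   # trainable, zero-init so the update starts at 0

def forward(x):
    # Effective weight is W + B @ A; only A and B receive gradients.
    return x @ (W + B @ A).T

# Parameter savings: a full update trains d*k values, LoRA trains r*(d+k).
full_params = d * k        # 262144
lora_params = r * (d + k)  # 8192 -> ~32x fewer trainable parameters
```

Because B starts at zero, the adapted model is exactly the base model at step 0, which is part of why these low-rank updates train so stably.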
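And a taste of the "what are you optimising for" question: the DPO loss rewards the policy for preferring the chosen response over the rejected one more strongly than a frozen reference model does. A sketch for a single preference pair (the function name and example log-probs are illustrative):

```python
import math

# DPO loss for one preference pair, given summed log-probs of the
# chosen (w) and rejected (l) responses under the trained policy
# and a frozen reference model.
def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Margin: how much more the policy prefers the chosen response
    # than the reference does, net of the same shift on the rejected one.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log(sigmoid(beta * margin))

# If the policy hasn't moved from the reference, the margin is 0 and
# the loss sits at -log(0.5) ~ 0.693; widening the margin drives it toward 0.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 3))  # 0.693
```

Note there's no explicit reward model here: the preference signal is baked directly into the loss, which is exactly the design choice we'll pick apart in the session.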
Who this is for
Builders who've shipped something with LLMs and want to go deeper. Some technical depth assumed — we won't be explaining what a token is.
Format: Guided discussion + hands-on practical
Venue: Co-labs Coworking, KL Sentral
Organized by AI.SEA