

Fine-Tune Gemma 4 on Your Data (LoRA Hands-On) - Open Registration
Bring a laptop. Bring your data. Leave with a model that knows it.
Gemma 4 dropped on March 31. Apache 2.0, fully open weights, runs on a MacBook. We're going to spend two hours teaching it to do something specific, on data you actually care about.
This is a hands-on workshop on LoRA fine-tuning. No theory dump, no slide deck. You'll set up a training run on a real H200, watch the loss curve, eval the result, and walk out with a custom adapter you can host or ship.
What you'll do
Pick a task. A tone (your Slack voice, your support replies, your code-review comments). A skill (classifying tickets, drafting briefs, answering domain questions). A persona. Anything you have data for.
We'll prep the dataset, configure the LoRA, kick off training on Nebius AI Cloud H200 GPUs, then evaluate against the base model. Expect to see the difference.
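If you want a feel for the data-prep step before the night, here's a minimal sketch of turning prompt/response pairs into the chat-style JSONL format common for instruction fine-tuning. Field names and the example task are illustrative, not the workshop's exact schema:

```python
import json

def to_chat_jsonl(pairs, path):
    """Write (prompt, response) pairs as chat-format JSONL,
    one training example per line. Field names are illustrative."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt, response in pairs:
            example = {"messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": response},
            ]}
            f.write(json.dumps(example, ensure_ascii=False) + "\n")

# A toy ticket-classification dataset, two examples deep.
pairs = [
    ("Classify this ticket: 'App crashes on login'", "bug"),
    ("Classify this ticket: 'Please add dark mode'", "feature-request"),
]
to_chat_jsonl(pairs, "train.jsonl")
```

A few hundred lines like this is plenty for a tone or narrow-skill adapter; we'll cover how much data your task actually needs in the room.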
What you get
A shared $100 pool of Nebius AI Cloud credits to power H200 GPU training in the room. Split across 20 attendees, that's ~85 minutes of H200 time each, enough to run a real fine-tune live.
$50 in Nebius Token Factory inference credits on your own account for testing your fine-tune against the base model.
The full workshop repo, including data prep scripts, training config, and eval harness.
A working LoRA adapter on a model you choose: Gemma 4 E2B (5GB, runs on a laptop), E4B (8GB), or 26B-A4B (the MoE).
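The adapter you leave with comes from a config along these lines. This is a hedged sketch using the Hugging Face PEFT library; the hyperparameters and target module names are illustrative defaults, not the workshop's exact recipe:

```python
from peft import LoraConfig

# Illustrative starting point, not the workshop's tuned recipe.
lora_config = LoraConfig(
    r=16,            # adapter rank: capacity vs. adapter-size trade-off
    lora_alpha=32,   # scaling factor, commonly set to 2x the rank
    lora_dropout=0.05,
    # Assumed attention-projection names; verify against your model's modules.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

The rank `r` is the main knob: higher ranks capture more, but the adapter grows and overfits faster on small datasets. We'll pick these values together based on your task and data size.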
Who should come
You've shipped something with an LLM. You've hit the wall where prompting alone isn't enough. You want to own the model, not rent it.
Whether you've never trained a model or you've shipped a dozen, the workflow is the same. We'll calibrate to the room.
Logistics
Tuesday, May 5 at 7:00 PM
Frontier Tower, Floor 10 (the Immersive Commons floor)
Bring your laptop, your data (or use ours), and a Hugging Face account
Snacks and water on the house, courtesy of our friends at Nebius
Hosts and sponsors
Hosted by the Immersive Commons Applied AI team. Lead: Rayyan Zahid. Co-speaker: Eric Mockler.
Sponsored by Nebius. GPU credits, Token Factory inference, and on-the-ground support provided by Nebius AI Cloud, the cloud built for AI. Co-host: Colin (Nebius DevRel).
Gemma 4 is an open-weight model family from Google DeepMind. Full Apache 2.0 license, available on Hugging Face. This workshop is independently produced and is not sponsored or endorsed by Google.
---
Limited to 25 attendees. Doors open at 6:45.