

Train and Deploy Your Own SLM 2.0
Workshop: Train and Deploy Your Own Small-Language-Model (SLM) with distil labs
Join us for an interactive workshop with distil labs where you’ll learn to train your own small language model (SLM) on their platform and deploy it in a local RAG system.
This event covers roughly the same material as V1, apart from the RAG section. If you joined us for V1, you won't learn much that is new.
What you’ll need:
This is a technical workshop, so please make sure you meet these minimum technical requirements:
A laptop
Familiarity with Jupyter notebooks and Python code
Basic understanding of Large Language Models
Basic understanding of RAG systems
We recommend joining only if you can answer most of the following questions:
What is a token?
What does attention do in an LLM?
What does recall mean for a RAG system?
Why do RAG systems often use reranking?
These are not hard requirements, but they help make sure you can follow the workshop properly.
Agenda:
1. Introduction: Brief overview of distil labs and their methodology
2. Hands-On Training: Guided session on training an SLM for question answering
3. Deployment Walkthrough: Step-by-step guide to deploying an SLM in a RAG system on your laptop
This workshop is perfect for anyone eager to explore practical applications of NLP. See you there!
Make sure to also join the MLOps Community Slack at https://join.slack.com/t/mlops-community/shared_invite/zt-36q0g9r83-qZLH7z2UA8~auwhfO7x1ZA
LIMITED SEATS AVAILABLE: Register now