

How to build your own SLM
Large language models are powerful, but they’re expensive, slow, and overkill for most real-world products.
The next wave of AI isn’t just about using GPT-4 class models.
It’s about building your own Small Language Model (SLM) that’s optimized for your specific use case.
In this session, we’ll break down:
• What an SLM actually is (and when you should build one)
• How to choose a base model (LLaMA, Mistral, Gemma, Phi, etc.)
• How to collect and structure high-signal domain data
• Fine-tuning strategies (full fine-tuning vs. parameter-efficient methods like LoRA)
• Quantization + deployment strategies
• When to route to a larger model vs. keep everything small
• Cost optimization and production tradeoffs
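One practical way to frame the routing decision above is a confidence threshold: serve the small model's answer when its own confidence is high, and escalate to a larger model only when it isn't. A minimal sketch — the model stand-ins and confidence scores here are hypothetical, not any specific API:

```python
from dataclasses import dataclass

@dataclass
class Completion:
    text: str
    confidence: float  # e.g., mean token probability reported by the model

def route(prompt: str, slm, llm, threshold: float = 0.85) -> str:
    """Try the small model first; fall back to the large model
    only when the SLM's confidence is below the threshold."""
    result: Completion = slm(prompt)
    if result.confidence >= threshold:
        return result.text      # cheap path: SLM answer is good enough
    return llm(prompt).text     # expensive path: escalate to the big model

# Toy stand-ins for real model clients (hypothetical):
slm = lambda p: Completion("slm answer", 0.9 if "refund" in p else 0.4)
llm = lambda p: Completion("llm answer", 0.99)
```

In production the confidence signal might be token log-probabilities, a classifier head, or a heuristic on the query type; the routing logic stays the same.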
We’ll go beyond theory and focus on practical architecture decisions for founders and engineers building AI products today.
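The quantization tradeoff mentioned above is easy to see with a toy example: symmetric 8-bit quantization stores each weight as a signed byte plus one shared scale factor, cutting memory roughly 4x versus float32 at the cost of a small rounding error. A pure-Python sketch of the idea (real deployments would use a library such as llama.cpp or bitsandbytes rather than this):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to integers in
    [-127, 127] using a single per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.45, 0.0, 0.77]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight is within half a quantization step of the original.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The error bound (half the scale per weight) is why quantization usually costs little accuracy while dramatically shrinking memory and bandwidth requirements.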