

Programming LLMs with DSPy
Stop prompt hacking. Start engineering.
Most AI development today follows a broken pattern: write a prompt, test it manually, hope it works in production, and pray when it doesn't. When things break, you tweak the prompt and cross your fingers.
This isn't engineering—it's guesswork.
DSPy changes this.
DSPy brings software engineering rigor to AI systems. Instead of fragile prompt strings, you write declarative contracts. Instead of manual testing, you build automated evaluation pipelines. Instead of hoping for improvement, you measure and optimize systematically.
What You'll Learn
The AI Engineering Loop — A systematic methodology for building, measuring, and improving AI systems
Declarative Contracts — Define what you want, not how to prompt for it
Composable Modules — Build complex AI systems from simple, reusable building blocks
Metrics-First Development — Measure before you optimize, optimize before you ship
Systematic Optimization — Move beyond trial-and-error to data-driven improvement
Who This Is For
✅ You're building AI-powered applications and want more control than prompt engineering offers
✅ You're frustrated with brittle prompts that break when you change models or inputs
✅ You want to measure AI system performance, not just hope it works
✅ You're moving from "deploy and pray" to systematic, measurable AI development
Prerequisites
Python proficiency (functions, classes, type hints)
Basic understanding of what LLMs are and do
Laptop with Python 3.10+ and an API key (OpenRouter, OpenAI, Anthropic, or Ollama)
No prior DSPy, prompt engineering, or ML experience required.
What You'll Walk Away With
Working notebooks covering each concept
Evaluation pipeline patterns you can adapt to your use cases
The confidence to ship AI systems knowing you can measure and improve them
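The evaluation-pipeline idea above can be sketched without any DSPy machinery at all: a metric is just a function, and evaluation is just a loop over labeled examples. The metric, dataset, and stand-in "program" below are all illustrative assumptions, not workshop code.

```python
# A hypothetical exact-match metric: True if the prediction matches
# the gold answer after trimming whitespace and lowercasing.
def exact_match(gold: str, predicted: str) -> bool:
    return gold.strip().lower() == predicted.strip().lower()

# A tiny evaluation loop: score any callable from question -> answer
# against a labeled dataset, returning the fraction of matches.
def evaluate(program, dataset) -> float:
    scores = [exact_match(gold, program(question)) for question, gold in dataset]
    return sum(scores) / len(scores)

# Illustrative stand-in program (a dict lookup) and dataset.
dataset = [("2 + 2", "4"), ("Capital of France?", "Paris")]
lookup = {"2 + 2": "4", "Capital of France?": "paris"}
print(evaluate(lambda q: lookup[q], dataset))  # → 1.0
```

Because the program under test is just a callable, the same loop works whether it wraps a raw LLM call, a DSPy module, or a mock, which is what makes measurement repeatable.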
Instructor: Daryl Roberts, Head of AI @ obney.ai