

Automated Prompt Optimization with Evidently AI
Improving LLM prompts using data-driven feedback optimization - Mikhail Sveshnikov
Outline:
Overview of prompt optimization challenges and common approaches (manual iteration, few-shot learning, etc.)
How Evidently AI's prompt optimization works: using feedback from mistakes on real data to iteratively improve prompts (see the sketch after this outline)
Live demonstration: optimizing a classification prompt step-by-step, showing how errors are identified and used to refine the prompt
Q&A and discussion: best practices, when to use different strategies, and practical considerations
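To make the second outline item concrete, here is a minimal sketch of a feedback-driven prompt optimization loop: run a candidate prompt over labeled data, collect the misclassified examples, and ask an LLM to rewrite the prompt based on those mistakes. This is an illustration of the general idea only, not Evidently AI's actual API; the OpenAI client, the model name, the tiny dataset, and the prompts are all placeholder assumptions.

```python
# Illustrative sketch of feedback-driven prompt optimization (NOT Evidently AI's API).
# Assumes the OpenAI Python client and an API key; model, data, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

# Tiny labeled dataset: (text, expected label) pairs for a classification task.
dataset = [
    ("The package arrived broken and support never replied.", "negative"),
    ("Great value, shipping was fast.", "positive"),
    ("It works, I guess.", "neutral"),
]

def classify(prompt: str, text: str) -> str:
    """Run the candidate prompt on one example and return the predicted label."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

def evaluate(prompt: str):
    """Return accuracy and the list of misclassified examples."""
    mistakes = []
    for text, expected in dataset:
        predicted = classify(prompt, text)
        if predicted != expected:
            mistakes.append((text, expected, predicted))
    accuracy = 1 - len(mistakes) / len(dataset)
    return accuracy, mistakes

def refine(prompt: str, mistakes) -> str:
    """Ask the model to rewrite the prompt, using the observed errors as feedback."""
    feedback = "\n".join(
        f'Text: "{t}" | expected: {e} | got: {p}' for t, e, p in mistakes
    )
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                "Improve this classification prompt so it avoids the errors below. "
                "Return only the new prompt.\n\n"
                f"Current prompt:\n{prompt}\n\nErrors:\n{feedback}"
            ),
        }],
    )
    return response.choices[0].message.content.strip()

# Optimization loop: stop early once the prompt makes no mistakes on the data.
prompt = "Classify the sentiment of the text as positive, negative, or neutral. Answer with one word."
for step in range(5):
    accuracy, mistakes = evaluate(prompt)
    print(f"step {step}: accuracy={accuracy:.2f}")
    if not mistakes:
        break
    prompt = refine(prompt, mistakes)

print("Final prompt:\n", prompt)
```

In practice the evaluation set, stopping criteria, and the way errors are summarized back to the model are the interesting design choices; the talk covers how this is done on real data rather than a toy example.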
About the Speaker:
Mikhail Sveshnikov is an AI engineer at Evidently AI with 10+ years in ML and MLOps, focused on building developer tools for reliable and measurable AI in production.
DataTalks.Club is the place to talk about data. Join our Slack community!