

LLM-as-a-Judge Evals: Role of COT Reasoning & Explanations
About Event
In any LLM-as-a-judge evaluation, two prompt-design choices shape how reliable and useful the results are: whether you ask the judge for explanations, and whether you add chain-of-thought (CoT) prompting.
In this session, Elizabeth Hutton (Senior AI Engineer - Evals) and Sri Chavali (AI Engineer) will examine whether the ordering of explanations and scores matters, when CoT genuinely helps, and why clear, well-structured prompts often outperform extra reasoning steps.
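To make the two choices concrete, here is a minimal sketch of a judge prompt with and without explanation-first CoT. The template wording, the 1-5 accuracy scale, and the parse helper are illustrative assumptions, not the speakers' actual eval prompts.

```python
# Illustrative sketch only: the templates, the 1-5 rubric, and parse_score
# are assumptions for demonstration, not Arize's actual eval prompts.

SCORE_ONLY = """You are grading an answer for factual accuracy.
Question: {question}
Answer: {answer}
Respond with a single integer score from 1 (inaccurate) to 5 (accurate)."""

EXPLANATION_FIRST = """You are grading an answer for factual accuracy.
Question: {question}
Answer: {answer}
Think step by step. First write a brief explanation of your reasoning,
then on a new line write "Score: <1-5>"."""

def parse_score(response: str) -> int:
    """Extract the integer score from a judge response.

    Works for both templates: the score-only reply is a bare integer,
    and the explanation-first reply ends with a "Score: N" line.
    """
    for line in reversed(response.strip().splitlines()):
        digits = [tok for tok in line.replace("Score:", "").split() if tok.isdigit()]
        if digits:
            return int(digits[0])
    raise ValueError(f"No score found in judge response: {response!r}")

# Example with a canned explanation-first judge response:
print(parse_score("The answer cites the correct year.\nScore: 4"))  # -> 4
```

The difference between the two templates is exactly the kind of design decision the session covers: the explanation-first variant forces the judge to reason before committing to a score, at the cost of longer, harder-to-parse outputs.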
Join us to learn practical strategies for making LLM evaluations more accurate, transparent, and scalable.
Presented by
Arize AI
Generative AI-focused workshops, hackathons, and more. Come build with us!