Presented by
Tenex
Your AI transformation partner. Subscribe to stay up to date on Applied, our free digital event series. Hosted by Alex Lieberman & Arman Hezarkhani.

How do you know if your org's AI is useful? Evaluating LLMs (w/ Parlance Labs)

Virtual
Registration
Approval Required
Your registration is subject to host approval.
About Event

RSVP FREE!

Large language models are shipping faster than teams can measure them. Most companies are still stuck in vibe-check mode—ship a prompt tweak, pray nothing breaks.

For this episode of Human in the Loop, we’re joined by Hamel Husain, a guest with two decades in machine learning, including early LLM research at Airbnb and GitHub (work that informed OpenAI’s code-understanding models). He is also the co-creator of AI Evals for Engineers & PMs, a course taught to 3,000+ builders across OpenAI, Anthropic, Google, Microsoft, and more.
