Presented by
Arize AI
Generative AI-focused workshops, hackathons, and more. Come build with us!

Community Paper Reading: Why Language Models Hallucinate

Zoom
About Event

Join our upcoming community paper reading, where we'll dive into OpenAI's paper "Why Language Models Hallucinate."

We're thrilled to host one of the paper's authors — Santosh S. Vempala, Frederick Storey II Chair of Computing and Distinguished Professor in the School of Computer Science, Georgia Tech — who will walk us through the research and its implications. If time permits, there will be a live Q&A session, so bring your questions!

The paper argues that hallucinations persist due to the way most evaluations are graded—language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This “epidemic” of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations.
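
To make that incentive concrete, here is a minimal back-of-the-envelope sketch in Python (ours, not from the paper): under binary right-or-wrong grading, a partly confident model maximizes its expected score by guessing, while a rule that penalizes wrong answers makes "I don't know" the better play.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score for answering rather than abstaining.

    A correct answer scores 1, a wrong answer scores -wrong_penalty,
    and abstaining ("I don't know") scores 0 under either rule.
    """
    return p_correct * 1.0 + (1.0 - p_correct) * (-wrong_penalty)

p = 0.3  # suppose the model is only 30% sure of its answer

# Binary grading (wrong answers cost nothing): guessing wins.
print(expected_score(p, wrong_penalty=0.0))  # 0.3 > 0.0, so guess

# Penalized grading (wrong answers cost 1 point): abstaining wins.
print(expected_score(p, wrong_penalty=1.0))  # -0.4 < 0.0, so abstain

This is the arithmetic behind the mitigation the paper proposes: state an explicit confidence target in the evaluation instructions and penalize wrong answers accordingly, so that abstaining is no longer a strictly dominated response.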

Read the paper: https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf
