

F2025 - Technical Paper Reading Group Week 5 - Scalable Oversight
UBC AI Safety Technical Paper Reading Group
UBC AI Safety has launched a biweekly technical paper reading group focused on cutting-edge AI safety research.
Sessions will engage with recent papers across topics including mechanistic interpretability, AI control, scalable oversight, capability evaluation, and failure mode identification. The group emphasizes critical analysis and discussion.
Session 5: Scalable Oversight
Location: IKB 194
Session Plan
This session looks at scalable oversight: the problem of having weaker evaluators reliably supervise models more capable than themselves. This encompasses techniques like debate, recursive reward modeling, and various amplification schemes. The central challenge is that more capable models may be able to deceive their overseers. We'll critically evaluate a range of current approaches and discuss their theoretical foundations. Dinner will be provided!
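For those new to the topic, here is a minimal sketch of one such technique, a debate-style oversight protocol. The function names and interfaces are illustrative assumptions for exposition, not taken from any particular paper or codebase:

    # Illustrative sketch of a debate-style oversight protocol. All names and
    # interfaces are assumptions for exposition, not from a specific codebase.

    def run_debate(question, debater_a, debater_b, judge, rounds=3):
        """Each debater is a callable (question, transcript) -> argument string;
        the judge is a callable (question, transcript) -> "A" or "B"."""
        transcript = []
        for _ in range(rounds):
            # Two capable models argue for opposing answers, each seeing the
            # full transcript so it can rebut the other's claims.
            transcript.append("A: " + debater_a(question, transcript))
            transcript.append("B: " + debater_b(question, transcript))
        # The weaker judge never answers the question directly; the hope is
        # that judging which argument survives scrutiny is an easier task.
        return judge(question, transcript)

Whether protocols like this stay robust as the debaters get stronger is exactly the kind of question we'll discuss in the session.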
Prereading:
This primer from Adam Jones (BlueDot) is a quick read and covers everything you need to know for this session. We highly recommend reading it before attending if you are able.
Who Should Attend:
Meetings are open to anyone interested in technical AI safety research. No prior experience is required, but participants with a working knowledge of AI safety and machine learning concepts will get the most out of discussions. If you're unsure whether you have sufficient background, check out this preparation document, which lists resources on the topics you should be familiar with to engage fully with the material.