

AI Safety Thursday: The Limitations of Reinforcement Learning for LLMs in Achieving AI for Science
What are the fundamental bottlenecks in reinforcement learning for scientific AI?
LLMs combined with Reinforcement Learning (RL) have unlocked impressive new capabilities. But is more scaling all we need to reach the next step: AI for science and research? If not, what are the limitations, and what else is required?
In this talk, Yongjin Yang will share research on three fundamental bottlenecks of RL for LLMs: skewed queries, limited exploration, and sparse reward signals. We'll also discuss potential solutions and the safety concerns that arise from advancing RL toward AI for science.
Event Schedule
6:00 pm to 6:30 pm - Food and introductions
6:30 pm to 7:30 pm - Presentation and Q&A
7:30 pm to 9:00 pm - Open discussion
If you can't attend in person, join our live stream starting at 6:30 pm via this link.
This is part of our weekly AI Safety Thursdays series. Join us in examining questions like:
How do we ensure AI systems are aligned with human interests?
How do we measure and mitigate potential risks from advanced AI systems?
What does safer AI development look like?