Recursive Language Models w/ Alex Zhang
About Event
🔬 AI4Science on alphaXiv
🗓 Friday, October 31, 2025 · 11AM PT
🎙 Featuring Alex Zhang
💬 Casual Talk + Open Discussion
🎥 Zoom: https://stanford.zoom.us/j/98514289898?pwd=Uw8AOTWKZMBamkc3aWN41Tde9n1Rt1.1&from=addon
Description: Handling arbitrarily long contexts is an open problem for language model systems. In this talk, we propose and discuss the Recursive Language Model (RLM) paradigm, an inference-scaling approach that lets language models handle near-infinite contexts. We cover our most recent write-up and results (https://alexzhang13.github.io/blog/2025/rlm/), and introduce a few more insights beyond the initial blog post.
Whether you’re working at the frontier of LLMs or just curious about anything AI4Science, we’d love to have you there.
