
Measuring and Improving Long-Horizon Reasoning Capabilities

Hosted by alphaXiv
Zoom
About Event

🔬 AI4Science on alphaXiv
🗓 Friday, May 15, 2026 · 11 AM PT
🎙 Featuring Sumeet Motwani and Charles London
💬 Casual Talk + Open Discussion

🎥 Zoom: Upon Registration

Description: As language models are increasingly deployed for complex autonomous tasks, their ability to reason accurately over long horizons becomes critical. An essential component of this ability is planning and managing a long, complex chain-of-thought (CoT). We introduce LongCoT, a scalable benchmark of 2,500 expert-designed problems spanning chemistry, mathematics, computer science, chess, and logic that isolates and directly measures the long-horizon CoT reasoning capabilities of frontier models. Each problem consists of a short input with a verifiable answer; solving it requires navigating a graph of interdependent steps that spans tens to hundreds of thousands of reasoning tokens. Each local step is individually tractable for frontier models, so failures reflect long-horizon reasoning limitations. At release, the best models achieve under 10% accuracy on LongCoT (GPT 5.2: 9.8%; Gemini 3 Pro: 6.1%), revealing a substantial gap in current capabilities. Overall, LongCoT provides a rigorous measure of long-horizon reasoning, tracking the ability of frontier models to reason reliably over extended periods.

Check out the full papers below!

h1: alphaxiv.org/abs/2510.07312 (training for long-horizon reasoning)

LongCoT: alphaxiv.org/abs/2604.14140 (benchmarking long-horizon reasoning capabilities)

Whether you’re working on the frontier of LLMs or just curious about AI4Science, we’d love to have you there.
