

90/30 Club (ML reading) #26: Tiny Recursive Model
Week 26: Tiny Recursive Model
Less is More: Recursive Reasoning with Tiny Networks
This paper introduces the Tiny Recursive Model (TRM), which drastically simplifies earlier hierarchical reasoning approaches (notably the Hierarchical Reasoning Model) while achieving better performance on puzzle tasks such as Sudoku and ARC-AGI. Using a single 2-layer network with only 7M parameters, TRM outperforms much larger language models by recursively refining both its current answer and an internal latent reasoning state.
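The core loop is easy to picture. Below is a minimal PyTorch sketch of the idea, assuming one tiny network reused at every step; the names (`TinyRecursiveSketch`, `n_latent_steps`, `n_answer_steps`) and the exact update rule are illustrative, not the paper's code:

```python
import torch
import torch.nn as nn

class TinyRecursiveSketch(nn.Module):
    """One tiny network, reused recursively to refine a latent state z and an answer y."""

    def __init__(self, dim: int = 128):
        super().__init__()
        # A single 2-layer MLP stands in for TRM's tiny network (illustrative only).
        self.net = nn.Sequential(
            nn.Linear(3 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x, y, z, n_latent_steps: int = 6, n_answer_steps: int = 3):
        # Outer loop: each pass improves the current answer y.
        for _ in range(n_answer_steps):
            # Inner loop: refine the latent reasoning state z, conditioned on
            # the question x and the current answer y.
            for _ in range(n_latent_steps):
                z = self.net(torch.cat([x, y, z], dim=-1))
            # Use the refined reasoning state to update the answer itself.
            y = self.net(torch.cat([x, y, z], dim=-1))
        return y, z

# Hypothetical usage: a batch of 4 "questions" embedded in a 128-dim space.
model = TinyRecursiveSketch(dim=128)
x = torch.randn(4, 128)   # question embedding
y = torch.zeros(4, 128)   # initial answer guess
z = torch.zeros(4, 128)   # initial reasoning state
y, z = model(x, y, z)
```

The point of the sketch: effective depth comes from how often the same few weights are applied, not from how many weights there are.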
The work challenges conventional scaling wisdom by showing that a smaller network applied recursively, many times over, can outperform far larger models. The result is a parameter-efficient approach to complex reasoning, particularly relevant for practitioners interested in efficient architectures and the role of iterative refinement in problem-solving.
Join us at Mox to explore:
- Artificial reasoning: think again vs. think bigger?
- What if we've been scaling AI models in completely the wrong direction?
Discussion at 20:00; optional quiet reading from 19:00.