
RLMs: Recursive Language Models

About Event

Recursive Language Models, or RLMs, are being hailed as the paradigm of 2026.

Why?

TL;DR: they are how we plan to manage extremely long contexts.

In short, we will need them for context engineering.

From the original paper by Zhang et al.,

We find that RLMs successfully handle inputs up to two orders of magnitude beyond model context windows and, even for shorter prompts, dramatically outperform the quality of base LLMs and common long-context scaffolds across four diverse long-context tasks, while having comparable (or cheaper) cost per query. [Ref]

The paper calls RLMs a “general inference strategy that treats long prompts as part of an external environment and allows the LLM to programmatically examine, decompose, and recursively call itself over snippets of the context.”
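To make that pattern concrete, here is a minimal sketch in Python. Everything in it is illustrative, not the paper's implementation: `llm()` is a hypothetical stub for a base-model call, and the fixed character chunking stands in for the code a real RLM would write for itself at inference time to examine and split the prompt.

```python
# A minimal sketch of the RLM idea, NOT the paper's actual implementation.
# Assumption: `llm(prompt)` is a hypothetical helper that calls a base model
# whose context window is far smaller than `long_prompt`.

CHUNK = 8_000  # characters per snippet; illustrative, not from the paper

def llm(prompt: str) -> str:
    """Hypothetical call to a base LLM; stubbed here."""
    raise NotImplementedError

def recursive_answer(question: str, long_prompt: str) -> str:
    # Base case: the prompt fits, so hand it to the model directly.
    if len(long_prompt) <= CHUNK:
        return llm(f"{long_prompt}\n\nQuestion: {question}")

    # Recursive case: the full prompt stays in the "environment" (this
    # variable); only snippets of it ever enter a model's context window.
    snippets = [long_prompt[i:i + CHUNK] for i in range(0, len(long_prompt), CHUNK)]
    partials = [recursive_answer(question, s) for s in snippets]

    # Combine the partial answers; this call recurses again if they are too long.
    return recursive_answer(question, "\n".join(partials))
```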

This development arises from the problem space of context engineering.

There are many tradeoffs to manage. For instance, as context length grows, cost increases while performance degrades (“context rot”).

A typical coding agent's approach to this problem is to successively summarize (“compact”) its context.
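As a rough illustration of that compaction style (again assuming a hypothetical `llm()` helper, with names of our own invention), here is a toy sketch. Note that whatever detail the summary drops is gone for good, which is exactly the weakness the next approach tries to avoid.

```python
# A toy sketch of successive summarization (compaction); illustrative only.

def llm(prompt: str) -> str:
    """Hypothetical call to a base LLM; stubbed here."""
    raise NotImplementedError

def compact(history: list[str], budget_chars: int, keep_recent: int = 4) -> list[str]:
    # If the transcript fits the budget (or is short), leave it alone.
    if sum(len(t) for t in history) <= budget_chars or len(history) <= keep_recent:
        return history
    # Otherwise, collapse everything but the most recent turns into one
    # summary. Anything the summary omits is lost permanently.
    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = llm("Summarize these conversation turns:\n" + "\n".join(old))
    return [summary] + recent
```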

Another approach avoids summarization entirely: keep the entire generated context in one place, while managing separately what actually gets put into the context window. This is called “context folding.”
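A toy sketch of that separation might look like the following; the `FoldedContext` class and its folding rule are our own illustration, not an API from any paper. The key property is that the full transcript is never discarded, only the model-facing view changes.

```python
# A rough sketch of "context folding": the full transcript lives intact
# outside the model, and only a managed view enters the context window.

class FoldedContext:
    def __init__(self, window_chars: int):
        self.full: list[str] = []   # everything ever generated, never discarded
        self.window_chars = window_chars

    def append(self, turn: str) -> None:
        self.full.append(turn)

    def window(self) -> str:
        # Build the view the model actually sees: recent turns verbatim,
        # older turns folded down to one-line stubs that could be expanded
        # again later, since the originals are still in `self.full`.
        view, used = [], 0
        for i, turn in enumerate(reversed(self.full)):
            if used + len(turn) <= self.window_chars:
                view.append(turn)
                used += len(turn)
            else:
                view.append(f"[folded turn {len(self.full) - 1 - i}: {turn[:40]}...]")
        return "\n".join(reversed(view))
```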

Many papers have discussed this, but RLMs are thought to be the “simplest, most flexible method for context folding.”

To learn why this is the case, and to take a deep dive into the history, surrounding research, core constructs, and big ideas behind RLMs, join us live!

🤓 Who should attend

  • AI Engineers, Data Scientists, MLEs, and Researchers interested in training and tuning LLMs and building agentic systems.

  • AI Engineers and leaders looking to understand the latest and greatest ways to approach building state-of-the-art agentic systems in 2026.

Speaker Bios

  • Dr. “Greg” Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for The AI Engineering Bootcamp, the longest-running AI Engineering bootcamp on Maven. Since 2021, he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Columbus, Ohio.

  • Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for The AI Engineering Bootcamp, the longest-running AI Engineering bootcamp on Maven. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.

Follow AI Makerspace on LinkedIn and YouTube to stay updated about workshops, new courses, and corporate training opportunities.
