Presented by
Rally SF
San Francisco events worth showing up for.
27 Went

AI Research Circle [members and +1s]

Past Event
About Event

About the AI Research Circle

The AI Research Circle is a community gathering where we explore AI research together. No research background required—just curiosity.

Each session, we pick a topic, break it down, and open it up for discussion. The goal: make cutting-edge ideas accessible and spark conversation across disciplines.

Session Details

Session Theme: Memory & Long-Context LLMs (and when RAG still wins)

LLMs can now accept 100k+ token context windows. But “accepting” tokens isn’t the same as using them well: attention gets expensive, models can miss information in the middle, and performance can degrade past training length.   
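To make the "expensive" part concrete, here is a rough back-of-the-envelope sketch (ours, not from the session materials; the 32-layer, 32-head, fp16 model shape is hypothetical). The attention score matrix is seq_len x seq_len per head, so memory and compute grow with the square of the context length:

def attn_score_memory_gb(seq_len, n_layers=32, n_heads=32, bytes_per_elem=2):
    # Memory to materialize the seq_len x seq_len attention score matrix
    # for every head in every layer, in fp16 (2 bytes). Kernels like
    # FlashAttention avoid materializing it, but the O(n^2) compute remains.
    return n_layers * n_heads * seq_len**2 * bytes_per_elem / 1e9

for n in (4_000, 32_000, 128_000):
    print(f"{n:>7} tokens -> {attn_score_memory_gb(n):,.0f} GB of scores")
# 4,000 tokens  ->     33 GB
# 32,000 tokens ->  2,097 GB   (8x the tokens, 64x the cost)
# 128,000 tokens -> 33,554 GB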

This session is a practical survey of how long-context systems work (RoPE scaling, long-context fine-tuning, sparse attention, recurrence/state-space approaches), plus an engineer’s discussion of the real tradeoff: when to rely on a bigger window vs. when retrieval (RAG) is the better tool.
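For a flavor of the positional-scaling bucket, here is a minimal NumPy sketch (our illustration, not the session's code) of RoPE position interpolation: compress positions by a scale factor so a longer sequence maps back into the position range the model was pretrained on.

import numpy as np

def rope_angles(positions, dim=128, base=10_000.0, scale=1.0):
    # Rotation angles for RoPE. scale > 1 implements position
    # interpolation: positions are compressed so a longer sequence
    # reuses the pretrained position range.
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,)
    return np.outer(positions / scale, inv_freq)       # (seq, dim/2)

def apply_rope(x, angles):
    # Rotate consecutive (even, odd) feature pairs of x by angles.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical: pretrained on 4k positions, serving 16k -> compress 4x.
q = np.random.randn(16_000, 128)
q_rot = apply_rope(q, rope_angles(np.arange(16_000), scale=16_000 / 4_000))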

Reading

What we’ll cover:

  • Why long context is expensive + fragile (quadratic attention, extrapolation limits) 

  • What breaks in practice (lost-in-the-middle, needle-in-haystack)   

  • A map of the solution space: positional scaling, efficient fine-tuning (LongLoRA), architecture shifts (Transformer-XL → Mamba/SSMs)   

  • Where RAG fits (external memory) and hybrid patterns people actually ship (see the sketch after this list)
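
For that last bullet, a toy sketch (ours; embed and llm are placeholders for whatever embedding model and LLM you use) of the most common hybrid pattern: retrieve first, then spend a bounded slice of the window on what retrieval surfaced, instead of stuffing the whole corpus into a 100k+ window.

import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=5):
    # Cosine-similarity top-k over precomputed chunk embeddings.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    return [docs[i] for i in np.argsort(-sims)[:k]]

def answer(question, docs, doc_vecs, embed, llm, budget_tokens=8_000):
    # Fill the window only with retrieved chunks, up to a token budget.
    context, used = [], 0
    for chunk in retrieve(embed(question), doc_vecs, docs):
        n_tokens = len(chunk.split())  # crude token estimate
        if used + n_tokens > budget_tokens:
            break
        context.append(chunk)
        used += n_tokens
    joined = "\n---\n".join(context)
    return llm(f"Context:\n{joined}\n\nQ: {question}")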

Who should join:

Anyone building with LLMs who’s asked:
“Should I just increase context?”
“When does RAG actually help?”
“What breaks first?”

Location
550 Laguna St, San Francisco (Full Studio)