Agents in Academia Meetup (AI Memory)
The "Agents in Academia" meetup aims to connect PhD students, faculty, and alumni with people working in or interested in agents in industry. We feature a variety of talks from academia relevant to people working on agents. This meetup will be around the theme of AI/agent memory.
Space is limited, so please register in advance, and cancel your registration if you can no longer make it.
Talks
Talk #1: Learning to Use External Memory for Long-Context LLMs - Dacheng Li (UC Berkeley)
Length-extrapolation-based methods effectively extend an LLM's context window. However, recent deployments of LLMs require them to continuously acquire information and reuse experience, rendering any fixed bound inadequate. We explore external-memory-augmented LLMs trained with reinforcement learning (RL). First, we outline a taxonomy of external-memory approaches and implement a simple yet general scaffold that extends a pre-trained LLM with an unbounded external memory exposing a summary-based put and an agentic get. Second, we post-train the model with a modified group relative policy optimization (GRPO) algorithm so that it learns when and how to use the external memory. On long-context retrieval and agentic scenarios, we observe better performance than a length-extrapolation method (YaRN) and better or comparable performance relative to specialized methods.
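To give a flavor of the put/get interface the abstract describes, here is a minimal Python sketch of such a scaffold. The call_llm helper, the prompts, and the retrieval loop are hypothetical stand-ins for illustration, not the speaker's actual implementation.

    # Minimal sketch of an external-memory scaffold with a summary-based
    # `put` and an agentic `get`. `call_llm` is a hypothetical placeholder.

    def call_llm(prompt: str) -> str:
        """Placeholder for a call to a pre-trained LLM."""
        raise NotImplementedError("wire up your own model here")

    class ExternalMemory:
        """Unbounded external memory attached to an LLM."""

        def __init__(self):
            self.entries: list[str] = []  # grows without a fixed bound

        def put(self, chunk: str) -> None:
            # Summary-based write: store a compressed summary rather than
            # raw text, so the memory stays cheap to scan as it grows.
            summary = call_llm(f"Summarize for later retrieval:\n{chunk}")
            self.entries.append(summary)

        def get(self, query: str, max_steps: int = 3) -> str:
            # Agentic read: the model iteratively chooses which entries
            # to inspect, instead of a single one-shot lookup.
            notes = ""
            for _ in range(max_steps):
                index = "\n".join(
                    f"[{i}] {e}" for i, e in enumerate(self.entries)
                )
                choice = call_llm(
                    f"Query: {query}\nNotes so far: {notes}\n"
                    f"Memory index:\n{index}\n"
                    "Reply with an entry number to read, or DONE."
                )
                if choice.strip() == "DONE":
                    break
                notes += "\n" + self.entries[int(choice.strip())]
            return notes

In the talk's setting, the policy deciding when to call put and get is itself post-trained with RL (the modified GRPO mentioned above), rather than being hand-scripted as in this sketch.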
Talk #2: Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models - Qizheng Zhang (Stanford)
Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation -- modifying inputs with instructions, strategies, or evidence rather than updating weights. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights in favor of concise summaries, and from context collapse, where iterative rewriting erodes details over time. Building on the adaptive memory introduced by Dynamic Cheatsheet, we introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation. ACE prevents collapse with structured, incremental updates that preserve detailed knowledge and scale with long-context models. Across agent and domain-specific benchmarks, ACE optimizes contexts both offline (e.g., system prompts) and online (e.g., agent memory), consistently outperforming strong baselines: +10.6% on agents and +8.6% on finance, while significantly reducing adaptation latency and rollout cost. Notably, ACE can adapt effectively without labeled supervision, instead leveraging natural execution feedback. On the AppWorld leaderboard, ACE matches the top-ranked production-level agent on the overall average and surpasses it on the harder test-challenge split, despite using a smaller open-source model. These results show that comprehensive, evolving contexts enable scalable, efficient, and self-improving LLM systems with low overhead.
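The generation/reflection/curation loop can be sketched in a few lines of Python. Everything here is illustrative under stated assumptions: call_llm is a hypothetical helper, and the prompts and append-only delta format are simplifications, not the paper's exact design.

    # Minimal sketch of one ACE-style adaptation step over a playbook
    # (the evolving context), assuming a hypothetical `call_llm` helper.

    def call_llm(prompt: str) -> str:
        """Placeholder for a call to an LLM."""
        raise NotImplementedError("wire up your own model here")

    def ace_step(playbook: list[str], task: str) -> list[str]:
        # 1. Generation: attempt the task with the current playbook
        #    included in the context.
        trajectory = call_llm(
            "Playbook:\n" + "\n".join(playbook) + f"\nTask: {task}"
        )

        # 2. Reflection: distill a lesson from natural execution
        #    feedback; no labeled supervision is required.
        lesson = call_llm(
            f"Trajectory:\n{trajectory}\n"
            "What strategy worked or failed, and why?"
        )

        # 3. Curation: apply a structured, incremental update (here,
        #    appending one bullet) instead of rewriting the whole
        #    context, which is what guards against brevity bias and
        #    context collapse.
        playbook.append(f"- {lesson}")
        return playbook

The design choice to emphasize is in step 3: because each update is a small, local edit to a structured playbook rather than a full rewrite, accumulated detail is preserved and the context can keep growing with long-context models.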
Agenda
5:30pm - Doors open
6:00-7:00pm - Talks
7:00-8:00pm - Mingling