Agents in Academia Meetup (Agent Memory, Local Inference)
Berkeley LLM Meetup
Meetup featuring open source LLM projects, with an emphasis on work out of UC Berkeley.
About Event

The "Agents in Academia" meetup aims to connect PhD students, faculty, and alumni with people working in or interested in agents in industry.​​ We feature a variety of talks from academia relevant to people working on agents.

Space is limited, so please make sure to register or cancel your registration if you can no longer make it.

Talks

Learning to Use External Memory for Long-Context LLMs - Dacheng Li (UC Berkeley)

Length-extrapolation-based methods effectively extend an LLM's context window. However, recent deployments of LLMs require them to continuously acquire information and reuse experience, rendering any fixed context bound inadequate. We explore external-memory–augmented LLMs trained with reinforcement learning (RL). First, we outline a taxonomy of external-memory approaches and implement a simple yet general scaffold that extends a pre-trained LLM with an unbounded external memory exposing a summary-based put and an agentic get. Second, we post-train the model with a modified group relative policy optimization (GRPO) algorithm to enable learned use of the external memory. On long-context retrieval and agentic scenarios, we observe better performance than a length-extrapolation method (YaRN) and better or comparable performance relative to specialized methods.
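To make the put/get scaffold concrete, here is a minimal, illustrative Python sketch of an unbounded external memory with a summary-based put and an agentic get. The class, method signatures, and placeholder summarizer/scorer are assumptions for exposition only, not the speaker's implementation.

```python
# Illustrative sketch (not the speaker's code): an unbounded external memory
# with a summary-based `put` and an agentic `get`, as described in the abstract.
from dataclasses import dataclass, field


@dataclass
class ExternalMemory:
    """Unbounded store of summaries the LLM can write to and search over."""
    entries: list[str] = field(default_factory=list)

    def put(self, chunk: str, summarize) -> None:
        # Summary-based put: compress the incoming context chunk before storing,
        # so the memory grows with summaries rather than raw tokens.
        self.entries.append(summarize(chunk))

    def get(self, query: str, score, k: int = 3) -> list[str]:
        # Agentic get: the model issues a query and retrieves the most relevant
        # summaries; `score` stands in for whatever relevance model is used.
        ranked = sorted(self.entries, key=lambda e: score(query, e), reverse=True)
        return ranked[:k]


# Toy usage with placeholder summarizer/scorer (an LLM would fill these roles).
memory = ExternalMemory()
memory.put("Chapter 1: the protagonist moves to Berkeley ...",
           summarize=lambda chunk: chunk[:40])
hits = memory.get("Where does the protagonist live?",
                  score=lambda q, e: sum(w in e.lower() for w in q.lower().split()))
print(hits)
```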

Intelligence per Watt: Measuring Intelligence Efficiency of Local AI - Jon Saad-Falcon (Stanford)

Large language model (LLM) queries are predominantly processed by frontier models in centralized cloud infrastructure. Rapidly growing demand strains this paradigm, and cloud providers struggle to scale infrastructure at pace. Two advances enable us to rethink this paradigm: small LMs (<=20B active parameters) now achieve performance competitive with frontier models on many tasks, and local accelerators (e.g., Apple M4 Max) run these models at interactive latencies. This raises the question: can local inference viably redistribute demand from centralized infrastructure? Answering this requires measuring whether local LMs can accurately answer real-world queries and whether they can do so efficiently enough to be practical on power-constrained devices (i.e., laptops). We propose intelligence per watt (IPW), task accuracy per unit of power, as a metric for assessing the capability and efficiency of local inference across model-accelerator pairs. We conduct a large-scale empirical study across 20+ state-of-the-art local LMs, 8 accelerators, and a representative subset of LLM traffic: 1M real-world single-turn chat and reasoning queries. For each query, we measure accuracy, energy, latency, and power. Our analysis reveals three findings. First, local LMs can accurately answer 88.7% of single-turn chat and reasoning queries, with accuracy varying by domain. Second, from 2023-2025, IPW improved 5.3x and local query coverage rose from 23.2% to 71.3%. Third, local accelerators achieve at least 1.4x lower IPW than cloud accelerators running identical models, revealing significant headroom for optimization. These findings demonstrate that local inference can meaningfully redistribute demand from centralized infrastructure, with IPW serving as the critical metric for tracking this transition. We release our IPW profiling harness for systematic intelligence-per-watt benchmarking.
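As a rough illustration of the IPW metric described above, the sketch below computes accuracy per watt from per-query accuracy, energy, and latency measurements. The field names and the aggregation are assumptions for exposition, not the released profiling harness.

```python
# Hedged sketch of intelligence per watt (IPW): task accuracy divided by power,
# aggregated over per-query measurements for one model-accelerator pair.
from dataclasses import dataclass


@dataclass
class QueryMeasurement:
    correct: bool      # did the model answer this query correctly?
    energy_j: float    # energy consumed for the query, in joules
    latency_s: float   # wall-clock latency, in seconds


def intelligence_per_watt(measurements: list[QueryMeasurement]) -> float:
    """Accuracy per watt for one model-accelerator pair."""
    accuracy = sum(m.correct for m in measurements) / len(measurements)
    # Average power (watts) = total energy (joules) / total time (seconds).
    avg_power_w = (sum(m.energy_j for m in measurements)
                   / sum(m.latency_s for m in measurements))
    return accuracy / avg_power_w


# Toy example: three queries on a hypothetical local accelerator.
runs = [
    QueryMeasurement(True, energy_j=12.0, latency_s=1.5),
    QueryMeasurement(True, energy_j=9.0, latency_s=1.2),
    QueryMeasurement(False, energy_j=15.0, latency_s=2.0),
]
print(f"IPW = {intelligence_per_watt(runs):.4f} (accuracy per watt)")
```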

Agenda

5:30pm - Doors open

6:00-7:00pm - Talks

7:00-8:00pm - Mingling

Location
930 Montgomery St
San Francisco, CA 94133, USA