

Papers with Para - Research Reading Group
A small, curated group of researchers, builders, and AI practitioners who read and discuss frontier research papers together — powered by Para.
Here's how it works: before each session, we share the papers we'll be covering. You read them beforehand (or let Para brief you - it's what it's built for). Then we hop on a Google Meet and spend an hour going deep - debating ideas, pulling apart methodologies, and connecting dots across papers.
This is also where Para's newest frontier capabilities get tested first. If you're building with or thinking about AI research tooling, this is the room to be in.
Format
→ Online only (Google Meet)
→ 1 hour per session
→ Starting once a week, moving to twice a week
→ Papers shared 1–2 days in advance
→ Days will vary — we'll always give advance notice
Who this is for
→ People who actively read, write, or work with research papers
→ Builders and PMs in AI who want to stay dangerously current
→ Anyone curious enough to not want to fall behind
How to join
This is a small group — 5 to 10 people to start. Fill out the screening form [https://forms.gle/eU8RHDYGspsVVQ1j9] and we'll get back to you. If you're already in the Papers with Para WhatsApp community, you know the vibe.
Paper 1 — Start here: "From Human Memory to AI Memory: A Survey" (arXiv 2504.15965, April 2025) arxiv.org/abs/2504.15965
This is your mental model paper. It maps human cognitive memory (sensory, short-term, long-term, episodic) directly onto AI system equivalents. The taxonomy is three-dimensional, classifying memory by object, form, and time; each axis is binary, so you get eight cells (2 × 2 × 2) into which any memory mechanism you'll encounter later can be placed. 26 pages, 3 tables. Read this first so every other paper clicks into place.
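If you want to pre-load the taxonomy before the session, here's a toy Python sketch of the three axes. The axis value names are paraphrased from the survey's framing, and the class itself is our illustration, not code from the paper.

```python
from dataclasses import dataclass
from enum import Enum

# The survey's three binary axes (value names paraphrased, not the paper's exact terms).
class Obj(Enum):
    PERSONAL = "personal"   # memory about the user
    SYSTEM = "system"       # memory about the agent/system itself

class Form(Enum):
    PARAMETRIC = "parametric"          # lives in model weights
    NON_PARAMETRIC = "non-parametric"  # lives outside the model (text, vectors, DBs)

class Time(Enum):
    SHORT_TERM = "short-term"  # within a session / context window
    LONG_TERM = "long-term"    # persists across sessions

@dataclass(frozen=True)
class MemoryMechanism:
    name: str
    obj: Obj
    form: Form
    time: Time

    def cell(self) -> tuple[str, str, str]:
        """Which of the 2 x 2 x 2 = 8 cells this mechanism falls into."""
        return (self.obj.value, self.form.value, self.time.value)

# Example: a plain RAG store holding user facts across sessions.
rag = MemoryMechanism("RAG user profile", Obj.PERSONAL, Form.NON_PARAMETRIC, Time.LONG_TERM)
print(rag.cell())  # ('personal', 'non-parametric', 'long-term')
```

Classifying a mechanism you already know this way (a RAG user profile, say) is a quick self-test that the taxonomy has clicked.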
Paper 2 — Go deeper: "A Survey on the Memory Mechanism of LLM-based Agents" (ACM TOIS 2025) dl.acm.org/doi/10.1145/3748302
Published in ACM Transactions on Information Systems — peer-reviewed, not just a preprint. The authors call it the "first comprehensive survey" specifically on LLM agent memory. It covers write/read mechanisms, memory types (in-context, external, parametric), and how agents evolve over time. A great second read once you have the taxonomy from Paper 1.
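To make the write/read split tangible, here's a hypothetical sketch of an external memory with explicit write and read paths. The interface and the word-overlap retrieval are our own stand-ins; the survey prescribes no API, and real systems use embeddings.

```python
from dataclasses import dataclass, field

@dataclass
class ExternalMemory:
    """Hypothetical external memory with an explicit write/read split.
    Retrieval here is naive word overlap, purely for illustration."""
    entries: list[str] = field(default_factory=list)

    def write(self, observation: str) -> None:
        # Write path: decide what to persist (here: everything, verbatim).
        self.entries.append(observation)

    def read(self, query: str, k: int = 3) -> list[str]:
        # Read path: return the k entries sharing the most words with the query.
        q = set(query.lower().split())
        ranked = sorted(self.entries, key=lambda e: -len(q & set(e.lower().split())))
        return ranked[:k]

memory = ExternalMemory()
memory.write("User prefers concise answers.")
memory.write("Session goal: discuss the A-MEM paper.")
print(memory.read("what does the user prefer?"))
```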
Paper 3 — See it in action: "A-MEM: Agentic Memory for LLM Agents" (arXiv 2502.12110, NeurIPS 2025) arxiv.org/abs/2502.12110
This one shows a real system. A-MEM borrows from the Zettelkasten note-taking method: every memory is stored as a note with keywords, tags, and contextual links to other memories. As new memories arrive, old ones get updated too. Tested across 6 foundation models and accepted to NeurIPS 2025, it concretizes everything the surveys treat in the abstract.
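If you'd like a feel for "memories as linked notes" before reading, here's a toy sketch. The shared-tag linking rule below is a deliberately dumb stand-in for A-MEM's actual LLM-driven link generation and memory evolution.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    keywords: set[str]
    tags: set[str]
    links: set[int] = field(default_factory=set)  # ids of related notes

class ZettelStore:
    """Toy Zettelkasten-style memory. A-MEM decides links and updates
    with an LLM; here, shared tags stand in for 'related'."""

    def __init__(self) -> None:
        self.notes: list[Note] = []

    def add(self, text: str, keywords: set[str], tags: set[str]) -> int:
        new = Note(text, keywords, tags)
        new_id = len(self.notes)
        for old_id, old in enumerate(self.notes):
            if old.tags & tags:          # a related older memory exists
                new.links.add(old_id)    # link the new note to it...
                old.links.add(new_id)    # ...and evolve the old note too
        self.notes.append(new)
        return new_id

store = ZettelStore()
store.add("Surveys give the taxonomy", {"taxonomy"}, {"memory", "survey"})
store.add("A-MEM stores memories as linked notes", {"zettelkasten"}, {"memory", "system"})
print(store.notes[0].links)  # {1}: the older note was updated by the new arrival
```

The thing to notice: adding a new note mutates old notes too, which is the "memory evolution" idea in miniature.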
Come prepared. Stay curious. Let Para handle the rest.