

Agentic Coding Summit #1: Why Does My AI Forget Everything?
You spend 20 minutes explaining your project architecture to Copilot, and the next day it has no idea. You set up Claude with detailed instructions, and a new session starts from scratch. Context is lost, instructions need repeating, and all the knowledge built up about a project just disappears.
The first Agentic Coding Summit explores how AI coding tools handle memory and context persistence across sessions. Practitioners share their real-world workflows and workarounds in 20-minute talks, followed by a roundtable discussion with all speakers.
Topics include GitHub Copilot Memory, Claude Memory and Projects, Cursor Rules, and other approaches to making your AI tools remember what matters.
This event is for developers and anyone else who works with AI coding tools on a daily basis.
🎤 Want to speak? We're looking for practitioners to share their experience. Submit your talk via Sessionize: https://sessionize.com/agentic-coding-summit-ai-memory/
Schedule:
12:00 – Welcome & Introduction
12:10 – Talk 1: Dr. Matthias Liebeck: GitHub Copilot Memory: From Black Box to Memory Bank
GitHub Copilot Memory promises to give your AI a long-term memory. But what does it actually remember, and who decides? In this talk, I explore the official Copilot Memory feature, a cloud-hosted black box that learns automatically but offers no user control, and contrast it with the Memory Bank pattern: a set of Git-tracked Markdown files that Copilot reads and updates with every task. No plugins, no external tools. Just a copilot-instructions.md and a folder of Markdown files that give your AI the context it needs to stop forgetting.
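As a rough illustration of the Memory Bank pattern the talk describes, the setup can be as small as a single instructions file pointing at a folder of Markdown notes. The file names and wording below are one common convention, not a fixed standard:

```markdown
<!-- .github/copilot-instructions.md — tells Copilot to consult the memory bank -->
At the start of every task, read all files in memory-bank/ and treat them
as the source of truth for this project. After finishing a task, update
memory-bank/progress.md with a short note on what changed and why.

<!-- memory-bank/ then holds plain, Git-tracked Markdown files, for example: -->
<!-- memory-bank/architecture.md — system overview and key decisions -->
<!-- memory-bank/conventions.md  — coding style and naming rules     -->
<!-- memory-bank/progress.md     — running log of completed work     -->
```

Because everything is plain Markdown under version control, the "memory" travels with the repository and is reviewable in pull requests like any other file.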
12:30 – Talk 2: Ben Sufiani: Meet Jarvis, My Vibe Manager
The AI workflow that keeps me sane, organized, and productive as a founder: the exact setup with Memory + Skills + Claude Code + Linear. I never touch my project management system directly; I just talk to it. My agent skills ensure the process is remembered and gets better with every iteration. It turns out this system is also a perfect task board for my OpenClaw instance "ChristAIna" to take work off my shoulders.
12:50 – Talk 3: Anna Lübken: Would You Still Be You Without Your Memories? Onboarding Chlawe, My OpenClaw Agent
Every morning, most AI agents wake up as strangers. Their skills are intact, but they have no idea who they are working with or what happened yesterday. That is not a tool problem; it is an identity problem. I built a 4-layer memory system for my OpenClaw agent, Chlawe, treating her like a new team member who deserves proper onboarding. The system combines always-injected working memory, a scored knowledge graph where facts gain importance through use and fade over time, a self-cleaning archive, and a nightly self-improvement loop where Chlawe reviews her own mistakes and updates her instructions. Facts follow a maturity lifecycle from draft to validated to core, and a compound scoring formula ensures the most trusted knowledge surfaces first. Everything is plain Markdown and fully git-friendly. I will demo it live and share a practical implementation guide you can apply to your own agents.
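The compound scoring idea can be sketched in a few lines. The weights, decay half-life, and function name below are hypothetical stand-ins, since the abstract does not disclose the actual formula; the shape of the calculation (reinforcement from use, decay with age, a maturity multiplier) follows the description above:

```python
import math
import time

# Hypothetical maturity multipliers for the draft → validated → core lifecycle.
MATURITY_WEIGHT = {"draft": 0.5, "validated": 1.0, "core": 2.0}

def memory_score(use_count, last_used_ts, maturity, now=None):
    """Compound score: facts gain importance through use and fade over time.

    use_count    — how often the fact has been retrieved
    last_used_ts — Unix timestamp of the last retrieval
    maturity     — lifecycle stage: "draft", "validated", or "core"
    """
    now = now if now is not None else time.time()
    age_days = (now - last_used_ts) / 86400

    reinforcement = math.log1p(use_count)        # diminishing returns on reuse
    decay = math.exp(-age_days / 30)             # unused facts fade (~30-day scale)
    return MATURITY_WEIGHT[maturity] * reinforcement * decay
```

A retrieval layer would then sort candidate facts by this score so that trusted, recently reinforced knowledge surfaces first.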
13:10 – Break
13:20 – Talk 4: Daina Bouquin: Knowledge That Survives the Reset: A File-Based Approach to AI Context
AI coding tools are getting better at remembering context across sessions. But the session reset problem points at something worth solving regardless: how do you treat working knowledge as a durable artifact rather than something that lives only in conversation?
Skill files and custom slash commands in Claude Code are two approaches to this question: structured, version-controllable files that encode methodology, style, and workflow in a form the model can read on demand. They work alongside native memory features, but they also survive without them.
This talk shares a practitioner's experience building this kind of architecture, including what the failure mode looks like when you skip it, and what changes when you start treating your working knowledge as a corpus to maintain rather than a conversation to repeat.
13:40 – Talk 5: Jens Kröhnert: Lies, Confusion & Amnesia — Giving AI a Personal Memory
AI assistants are powerful, but they also lie, forget, and sometimes leave humans just as confused as they are.
In this lightning talk, I’ll share a few practical lessons from experimenting with AI-assisted software development at ORAYLIS. Why do AI systems hallucinate? Why do they “forget” things mid-conversation? And why do humans and AI so easily drift into shared confusion?
More importantly: how can we work around these limitations?
I’ll show how lightweight approaches like structured MD files, reusable AI skills, development guidelines, and even personal MCP servers can give AI something it fundamentally lacks: persistent memory and context.
The result is not perfect AI, but AI that is far more reliable for tasks like multi-agent software development.
14:00 – Talk 6: Sheena Yap Chan: Why Your AI “Forgets” — and How Strategic Context Design Fixes It
AI coding tools are powerful, but most of them still operate like brilliant interns who forget everything between conversations. Developers lose time repeatedly re-explaining architecture, project decisions, and coding conventions. In this session, I’ll share practical strategies for designing persistent context across AI coding tools such as Cursor, Copilot, and Claude. Instead of relying solely on prompt history, we’ll explore structured approaches using instruction files, rule systems, project documentation, and workflow design to maintain long-term context. I’ll walk through real examples of how developers preserve architectural knowledge, align AI outputs with project standards, and prevent context drift as codebases scale. Attendees will leave with repeatable methods to make their AI tools consistently useful across sessions.
By the end of this session, participants will be able to:
1. Implement at least two methods for maintaining AI memory across sessions, including structured instruction files and project-level context documentation.
2. Design a repeatable workflow for preserving architectural context when working with AI tools such as Cursor, Copilot, or Claude.
3. Evaluate when context window limits require external memory systems and apply strategies to keep AI outputs aligned with project standards.
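One concrete shape such an instruction file can take is a Cursor project rule. The frontmatter below follows Cursor's `.mdc` project-rules format; the rule text and paths are invented examples of the kind of architectural knowledge worth persisting:

```markdown
---
description: Core architecture conventions
alwaysApply: true
---
- The API layer lives in src/api; UI code never queries the database directly.
- New services follow the existing repository pattern in src/repos/.
- TypeScript strict mode is on; no `any` in committed code.
```

Stored in the repository (e.g. under `.cursor/rules/`), a file like this is injected into every session, so the conventions no longer depend on prompt history.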
14:20 – Break
14:30 – Talk 7: Tim Dorbandt: Autonomous Coding: Context for OpenClaw, OpenAI, and GitHub Actions
The AI administrator, coding agents, and local resources: with OpenClaw and GitHub Actions, agents break through to the system level. Local applications can be administered, interfaces developed autonomously, and workflows mapped across system boundaries.
This talk examines several architectural approaches and shows how memory MD files can ensure persistent context across different development runs and infrastructure changes.
14:50 – Roundtable with all speakers: AI Memory: Solved Problem or Still Broken?
Every talk at this summit explored a different approach to AI memory. Now it's time to zoom out: Is AI memory a solved problem, or are we still patching around fundamental limitations? In this roundtable, our speakers discuss the trade-offs, the gaps, and what they want to see next.
15:20 – End
This is a free online event via Microsoft Teams. Talks will be in German or English.
Hosted by Dr. Matthias Liebeck, .NET developer and AI speaker, organizer of the Azure Düsseldorf Meetup and author of the GitHub Copilot newsletter at ghcp.liebeck.io.