


Building Intelligent Memory: Graphs for AI Agent Context and Retrieval
As AI agents become more sophisticated, their ability to maintain coherent, retrievable memory across conversations becomes critical for delivering personalized and contextually aware experiences. This talk explores how graph thinking can transform AI agent memory systems by modeling knowledge as interconnected entities and relationships rather than static text chunks.
We'll examine practical approaches for integrating graph databases into AI agent architectures, comparing direct framework integration with tool-based approaches built on the Model Context Protocol (MCP). Attendees will learn how to extract entities from conversational history, build dynamic knowledge graphs that evolve with each interaction, and implement GraphRAG patterns that enable rapid memory retrieval for agent context windows.
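To make the idea concrete, here is a minimal sketch of the memory pattern the talk describes: facts extracted from conversation are stored as (subject, relation, object) triples, and retrieval expands outward from the entities mentioned in a new query. All names here (`MemoryGraph`, `addFact`, `retrieveContext`) are invented for illustration; a production system would persist the graph in Neo4j or Dgraph and use an LLM for entity extraction.

```typescript
type Edge = { relation: string; target: string };

// In-memory stand-in for a graph database, keyed by subject entity.
class MemoryGraph {
  private edges = new Map<string, Edge[]>();

  // Record a (subject, relation, object) triple extracted from a conversation turn.
  addFact(subject: string, relation: string, object: string): void {
    const list = this.edges.get(subject) ?? [];
    list.push({ relation, target: object });
    this.edges.set(subject, list);
  }

  // GraphRAG-style retrieval: breadth-first expansion from the entities
  // mentioned in the current query, up to `depth` hops away.
  retrieveContext(seeds: string[], depth = 2): string[] {
    const facts: string[] = [];
    const seen = new Set<string>(seeds);
    let frontier = [...seeds];
    for (let hop = 0; hop < depth; hop++) {
      const next: string[] = [];
      for (const node of frontier) {
        for (const { relation, target } of this.edges.get(node) ?? []) {
          facts.push(`${node} ${relation} ${target}`);
          if (!seen.has(target)) {
            seen.add(target);
            next.push(target);
          }
        }
      }
      frontier = next;
    }
    return facts;
  }
}

// Facts extracted from earlier conversation turns:
const memory = new MemoryGraph();
memory.addFact("Alice", "works_at", "Acme");
memory.addFact("Acme", "uses", "Neo4j");

// A new question mentioning "Alice" also pulls in second-hop context about Acme:
console.log(memory.retrieveContext(["Alice"]));
// → ["Alice works_at Acme", "Acme uses Neo4j"]
```

The multi-hop traversal is the point of the graph approach: a vector search over text chunks would surface the Alice fact but could easily miss the related fact about Acme, which the graph reaches in one extra hop.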
The session includes hands-on demonstrations using Neo4j and Dgraph as complementary graph database solutions, with concrete implementation examples in the Vercel AI SDK. By the end of this talk, developers will understand how to architect graph-powered memory systems that help AI agents maintain long-term context, discover relevant connections across conversations, and provide more intelligent, personalized responses.
Key Takeaways: Graph modeling principles for AI memory, integration patterns and trade-offs, entity extraction and summarization techniques, and production-ready implementations with popular graph databases.
Speaker: William Lyon is an AI engineer at Hypermode, where he works to improve the developer experience of putting AI applications into production. Previously he worked as a software developer at Neo4j and at other startups. He is the author of the book “Full Stack GraphQL Applications” and earned a master's degree in Computer Science from the University of Montana. You can find him online at lyonwj.com.
