MemMachine Workshop @ AWS NYC
About the Workshop
As AI applications move beyond simple demos into real-world systems, memory becomes essential infrastructure.
In this hands-on evening workshop, we’ll walk through how to build a production-ready, memory-aware chatbot on AWS using:
Amazon Bedrock for hosted foundation models
MemMachine, an open-source memory layer for AI agents
EC2 + CloudFormation for deployment
You’ll see how memory transforms a stateless chatbot into a stateful, intelligent agent—and how to deploy and wire everything together on AWS.
What You’ll Learn
Why stateless LLM applications break at scale
How Amazon Bedrock simplifies model inference on AWS
How to deploy MemMachine on EC2 using CloudFormation
How to build a chatbot using Bedrock-hosted models
How to add persistent memory to that chatbot using MemMachine APIs
Practical design guidance on what to remember—and what not to
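To make the memory pattern covered above concrete, here is a toy, in-memory stand-in for what the workshop wires up: write a memory after each turn, then retrieve relevant memories before the next model call. In the real system the dictionary below is replaced by calls to a deployed MemMachine instance; the class and method names here are illustrative assumptions, not MemMachine's actual API.

```python
from collections import defaultdict

class ToyMemory:
    """Minimal sketch of a per-user memory store (stand-in for MemMachine)."""

    def __init__(self):
        # Maps user_id -> list of remembered facts.
        self._store = defaultdict(list)

    def add(self, user_id: str, text: str) -> None:
        """Persist a fact about a user after a conversation turn."""
        self._store[user_id].append(text)

    def search(self, user_id: str, query: str) -> list[str]:
        """Return memories relevant to the query.

        Naive keyword match for illustration; a real memory layer
        does semantic (embedding-based) retrieval.
        """
        terms = query.lower().split()
        return [m for m in self._store[user_id]
                if any(t in m.lower() for t in terms)]
```

The key design point the workshop stresses is selectivity: store durable user facts and preferences, not every message, and retrieve only what is relevant to the current turn.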
What We’ll Build
By the end of the workshop, you’ll have seen an end-to-end system that includes:
A MemMachine instance deployed on AWS
A chatbot powered by Amazon Bedrock
Memory added to the chatbot via MemMachine endpoints
A clear architectural pattern you can reuse in your own applications
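The architectural pattern above can be sketched in a few lines: retrieved memories are prepended to the user's message as context, and the combined prompt is sent to a Bedrock-hosted model. The Bedrock call uses boto3's Converse API; the model ID is illustrative, and the memory list is assumed to come from a MemMachine query.

```python
def build_prompt(user_message: str, memories: list[str]) -> list[dict]:
    """Prepend retrieved memories to the user's message as context."""
    if memories:
        context = "\n".join(f"- {m}" for m in memories)
        text = f"Relevant facts about this user:\n{context}\n\n{user_message}"
    else:
        text = user_message
    # Message shape expected by the Bedrock Converse API.
    return [{"role": "user", "content": [{"text": text}]}]

def chat(user_message: str, memories: list[str],
         model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    """Send a memory-augmented prompt to a Bedrock-hosted model."""
    import boto3  # requires AWS credentials with Bedrock access
    client = boto3.client("bedrock-runtime")
    resp = client.converse(modelId=model_id,
                           messages=build_prompt(user_message, memories))
    return resp["output"]["message"]["content"][0]["text"]
```

In the full system, `chat` would be bracketed by MemMachine calls: query for relevant memories before the turn, then write any new facts back afterward.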
Who Should Attend
AI / ML engineers
Backend & platform engineers
Developers building LLM-powered applications
Architects exploring agent-based systems on AWS
No prior MemMachine experience required.
Basic familiarity with Python and LLM concepts is helpful.
Logistics
This is an in-person event at the AWS NYC office
Government-issued photo ID (original) is required for building entry
Seating is limited—please register early
Final entry instructions will be shared via email before the event