

Give Your AI Agents Eyes and Ears: Perception 101 with VideoDB
Join AI engineers, startups, and creative professionals for a hands-on workshop on building real-time perception for agents.
LLMs gave us reasoning. RAG gave us retrieval. What’s missing in the modern agent stack is perception: the ability to see, understand, and act on the real world.
This workshop is led by Ashu, founder of VideoDB. You’ll learn how to turn continuous media (screen, mic, camera, RTSP, files) into structured context your agent can use.
Who should attend:
Engineers building agents that need continuous and temporal awareness (not one-shot screenshots).
Builders exploring skills for OpenClaw who want it to have eyes and ears for any task.
Research teams building in physical AI, AI companion robots, and wearables.
Product teams building meeting bots, desktop copilots, monitoring/ops, QA/compliance.
Expect production-grade demos, takeaways you can reuse, and an hour of networking to share ideas on agentic perception, video, multimodal AI, and frontier tech.
For more, check https://github.com/video-db
What You’ll Discover:
What “perception” actually means for agents: continuous, temporal, multi-source, searchable, actionable.
How to support three input modes with one mental model: files, live streams, desktop capture.
How to build searchable memory so your agent can retrieve results with playable evidence, not vibes.
How to move from batch video AI to real-time event streams your agent can react to immediately.
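The "searchable memory with playable evidence" idea above can be sketched in plain Python: ingest timestamped segments from any source (file, live stream, desktop capture) under one model, and return search hits as (start, end, text) spans a player could seek to. This is a toy illustration of the concept, not the VideoDB API; all class and method names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the recording
    end: float
    text: str

class PerceptionMemory:
    """Toy searchable memory: timestamped segments from any source
    (file, live stream, desktop capture) behind one interface."""

    def __init__(self) -> None:
        self.segments = []

    def ingest(self, start: float, end: float, text: str) -> None:
        self.segments.append(Segment(start, end, text))

    def search(self, query: str):
        # Naive keyword match; a real system would use embeddings
        # and multimodal indexing, but the return shape is the point:
        # each hit carries a playable time range, not just an answer.
        q = query.lower()
        return [s for s in self.segments if q in s.text.lower()]

memory = PerceptionMemory()
memory.ingest(0.0, 4.2, "Welcome to the demo")
memory.ingest(4.2, 9.8, "Here we open the billing dashboard")
memory.ingest(9.8, 15.0, "The error appears after clicking export")

hits = memory.search("error")
for s in hits:
    print(f"{s.start:.1f}-{s.end:.1f}s: {s.text}")
```

The same retrieval shape works whether segments arrive from a finished file in one batch or from a live stream one at a time, which is the "one mental model" across input modes.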
Plus:
Claude Code / Codex skills for vibe coding within your stack.
Refreshments and networking session with top builders working on agents + multimodal infra.