

Security and Governance for AI Agents
About the Event
AI agents are rapidly moving from experimental tools to autonomous workers performing tasks traditionally done by humans. As organizations begin deploying hundreds or thousands of agents alongside human teams, a critical question emerges:
How do we secure and govern AI agents at scale?
This event explores how security and governance must evolve for agentic systems where agents act autonomously, communicate through APIs and tools, and behave in non-deterministic ways. Traditional, perimeter-focused security models are no longer enough.
This session brings together founders, security researchers, and industry leaders to discuss how we should think about governing agent behavior, not just defending perimeters.
What We’ll Cover
Why AI agents require a new security and governance model
How agent attacks go beyond “outside-in” threats to behavioral risks
Governing agents through APIs, tools, and MCP workflows
What existing frameworks miss and what “SOC 2 for AI agents” could look like
How SAFE-MCP helps standardize agent security and governance
Hosts:
SAFE-MCP
SAFE-MCP is an open-source specification that documents attack vectors and mitigation techniques for AI agents and MCP. Initiated by Astha.ai and now part of the Linux Foundation and OpenID Foundation, SAFE-MCP is driven by a global community working to standardize agentic security.
Workato
Workato is an enterprise automation and integration platform that orchestrates workflows across applications, data, and systems, enabling secure, governed execution of complex processes as organizations adopt AI agents at scale. Get a free sandbox to explore Workato's end-to-end capabilities.
The Deep-Tech Community
We help AI/ML and deep-tech researchers become founders by bringing the right people and resources together.
Food and drinks provided