Presented by
SAFE-MCP
175 Going

Security and Governance for AI Agents @ Stanford

About the Event

Security and Governance for AI Agents

AI agents are rapidly moving from experimental tools to autonomous workers performing tasks traditionally done by humans. As organizations begin deploying hundreds or thousands of agents alongside human teams, a critical question emerges:

How do we secure and govern AI agents at scale?

This event explores how security and governance must evolve for agentic systems where agents act autonomously, communicate through APIs and tools, and behave in non-deterministic ways. Traditional security models are no longer enough.

Join founders, security researchers, and industry leaders to discuss how we should think about governing agent behavior, not just defending perimeters.


What We’ll Cover

  • Why AI agents require a new security and governance model

  • How agent attacks go beyond “outside-in” threats to behavioral risks

  • Governing agents through APIs, tools, and MCP workflows

  • What existing frameworks miss and what “SOC 2 for AI agents” could look like

  • How SAFE-MCP helps standardize agent security and governance


SAFE-MCP

SAFE-MCP is an open-source specification for AI agent and MCP attack vectors and mitigation techniques. Initiated by Astha.ai and now part of the Linux Foundation and OpenID Foundation, SAFE-MCP is driven by a global community working to standardize agentic security.


The Deep-Tech Community

We help AI/ML and deep-tech researchers become founders by bringing the right people and resources together.


🍕 Pizza and drinks provided

Location
Stanford University
450 Jane Stanford Way, Stanford, CA 94305, USA