Presented by
SL Events
70 Went

Trust Boundaries and Guardrails in Multi-Agent Systems | DeCompute 26, SF Chapter

Past Event
About Event

Hosted by Silence Laboratories, in partnership with Linux Foundation Decentralized Trust.

As organizations move from AI assistants to autonomous agents, the security perimeter has dissolved. Agents access live systems, call other agents, hold credentials, and make decisions — often faster than any human oversight loop can follow. The question is no longer whether agents will act autonomously. It is whether we have the infrastructure to govern how they do.

This invite-only roundtable brings together a small group of security leaders, architects, and builders for a direct, practitioner-led conversation on the trust and guardrail problems that matter most right now.


Keynote by Jason Clinton, Deputy CISO at Anthropic

Jason leads security strategy at Anthropic, one of the world's frontier AI labs, and is co-author of Securing AI Agents: Foundations, Frameworks, and Real-World Deployment. Previously, he led Chrome Infrastructure Security at Google, defending against nation-state threats.

Jason will open the roundtable with a sharp, firsthand framing of the agentic security challenge — and what architectural responses are beginning to emerge.


Event Sequence

3:00 PM — Welcome
Silence Laboratories opens the session, framing the challenge from two angles: cryptographic trust infrastructure and agentic threat detection.

3:05 PM — Keynote
Jason Clinton sets the tone.

3:07 PM — Roundtable: Trust Boundaries and Guardrails in Multi-Agent Systems

The discussion follows the natural lifecycle of an agent — from first access to collective action.

Stage 1 — Context Access and Integrity
Agents require data and context to function. That access is also the earliest and most exploitable attack surface. This stage covers how adversarial inputs — malicious emails, poisoned documents, compromised tool responses — corrupt agent behavior at the source, and what a principled, secure context access model looks like in practice.

Stage 2 — Agent Action and Accountability
Once an agent has context, it acts — often with real credentials, persistent memory, and access to downstream systems. This stage addresses cryptographic identity for agents, private multi-agent orchestration, and the accountability gap: when an autonomous agent takes a consequential action, what audit trail exists, and who is responsible?

Stage 3 — Governance Across Agent Networks
The hardest problems emerge when agents coordinate. This stage examines how trust and permissions should propagate across orchestrators and sub-agents, what meaningful consent enforcement looks like at machine speed, and what regulatorily defensible control over agentic systems actually requires.

4:15 PM — Open Discussion & Networking


Who Should Attend: Security leaders, AI architects, and founders building or governing agentic infrastructure — across enterprise security, financial services, and AI-native companies.

Invite-only. San Francisco, CA.

Location
The St. Regis San Francisco
125 3rd St, San Francisco, CA 94103, USA