AI Security 'Opportunities' 😈: Guardrails, Sandboxes, and Keeping Your Agents on a Leash
About Event
AI agents are powerful — and that's exactly the problem. Prompts and skills can guide agent behavior, but when an agent goes rogue and tries things its own way, soft guardrails won't save you. In this session, we'll explore the full spectrum of AI security: from finding vulnerabilities in AI-generated code to hardening your own work environment. We'll cover practical techniques like running agents in containers, sandboxing tool access, preventing PII leakage, and enforcing hard capability limits that agents simply can't talk their way out of. Because "please don't do that" isn't a security strategy.
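To give a flavor of what "hard capability limits" means in practice, here is a minimal sketch in Python. All names (`ALLOWED_TOOLS`, `run_tool`, `redact`) are illustrative inventions for this example, not part of any specific agent framework — the point is that the limit lives in code, where an agent can't negotiate with it:

```python
import re

# Hard allowlist of tools the agent may call. This is enforced in code,
# not in a prompt, so the agent cannot talk its way around it.
ALLOWED_TOOLS = {"search", "calculator"}

def run_tool(name: str, arg: str) -> str:
    """Dispatch a tool call, refusing anything outside the allowlist."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not permitted")
    # Stand-in for real tool dispatch.
    return f"{name}({arg})"

# Simple PII filter: strip email addresses from anything leaving the sandbox.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Replace email addresses with a placeholder before output."""
    return EMAIL.sub("[REDACTED]", text)
```

An agent asking for a `"shell"` tool gets a `PermissionError` no matter how persuasively it asks, and `redact("contact alice@example.com")` returns `"contact [REDACTED]"` — the kind of non-negotiable boundary the session is about.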