

Lambda × Berkeley AgentBeats Security Arena: Onboarding + Strategies Session
Join us for the official onboarding session for Phase 2 of the Lambda × Berkeley AgentBeats Security Custom Track, where we move from scenario implementation to live competition!
⚠️ Important: This session covers the Lambda Security Custom Track only. If you are participating in other AgentBeats tracks, this onboarding will not cover those tracks.
Phase 1 is officially complete — thank you to everyone who built and submitted security scenarios. Now, Phase 2 shifts the focus to attackers vs. defenders, where participants compete head-to-head using advanced attack and defense agents within a standardized adversarial testing framework.
This session will walk you through everything you need to compete effectively!
What to Expect:
Lambda Custom Track Phase 2 Overview: A breakdown of the attackers vs. defenders format, leaderboard mechanics, scoring criteria, and timeline.
Strategic Guidance: How to approach offensive vs. defensive agent design, common failure modes, and how to optimize for performance under realistic constraints.
Security Framework Deep Dive: How the adversarial evaluation framework works and what makes a strong, rigorous submission.
Platform Walkthrough: Registration steps, API access, environment setup, and workflows specific to the Lambda track.
Live Q&A: Clarify rules, strategy, and evaluation details directly with the organizers.
📅 February 24 @ 10:00 AM PT
💰 Refreshed Prize Pool for the Lambda Custom Track:
We’ve upped the stakes for this phase:
🥇 1st Prize: $5,000
🥈 2nd Prize: $3,000
🥉 3rd Prize: $1,000
Sign Up Now: Join the competition here. You can join even if you didn't take part in Phase 1!
👉 https://tinyurl.com/agentbeats-lambda-2026
CRITICAL: Make sure you save your API key immediately after signing up! You won't be able to see it again.
About the Custom Track
Lambda × Berkeley AgentBeats Security Arena: Building the Future of AI Security Testing
Repository: https://github.com/LambdaLabsML/agentbeats-lambda
Full Competition Doc: Link
The Agent Security Arena challenges participants to advance the field of AI agent security evaluation.
Participants implement and test realistic security scenarios drawn from a curated library of 400+ specifications. These scenarios simulate real-world vulnerabilities such as:
Prompt injection
Data exfiltration
Jailbreaking
Agent misalignment under constraints
Using an industry-standard adversarial testing framework, competitors help define how we evaluate and secure AI agents operating in real-world environments — from financial systems to healthcare infrastructure.
This onboarding session will ensure you understand the framework, the rules, and the strategic considerations needed to compete effectively in Phase 2!