

IEEE x Leonis Capital AI Safety Mixer
Join us for an evening of AI safety talks and conversations, and get to know researchers and practitioners working on some of the most important safety challenges of our time! The IEEE Computer Society SF Bay Area and Leonis Capital are throwing a mixer at the Hanwha AI Center in downtown SF.
Whether you're deep in safety research, building enterprise safeguards, or thinking about safety-related startup ideas, come hang out. We'll have drinks and bites while you chat with researchers, founders, and others who spend their days figuring out how to make advanced AI systems actually safe and beneficial.
Planned Agenda
5:30 Open Doors + Networking
6:00 Talk 1: Bill Stout (Technical Director, AI Product Security, ServiceNow) - A layered Enterprise Architecture approach to AI Red Teaming and ownership
6:20 Talk 2: Brian Bartoldson (Researcher, LLNL) - AI Safety at Scale
6:40 Talk 3: Fady Yanni (Co-Founder, HackAPrompt) - Prompt injection risk landscape
7:00 Panel: Ranjan Sinha (IBM Fellow & CTO, Enterprise AI and Data) [Moderator] - Operationalizing AI Safety
7:30 Networking + Mixer
Hosts
Leonis Capital is a research-driven VC fund based in Silicon Valley. We invest in Seed and Pre-Seed stage companies building AI-native products.
The IEEE Computer Society SF Bay Area connects professionals, researchers, and academics across the Bay Area to advance computing innovation through technical talks, networking events, and collaboration on emerging technologies in AI, software, and computer engineering.
Talk Details
Bill Stout (Technical Director, AI Product Security, ServiceNow) - A layered Enterprise Architecture approach to AI Red Teaming and ownership
Abstract: Organizations face critical challenges in AI governance and security: unclear ownership across multiple teams, scattered AI components, red teams focused narrowly on prompt injection rather than system-level threats, and incident response teams lacking AI-specific detection capabilities. Auditors struggle to identify accountable parties and inventory AI assets, while developers deploy autonomous agents without adequate transactional controls or business oversight.
This talk presents a systematic framework using layered organizational views to address AI governance and security comprehensively. The approach provides a structured methodology for mapping AI ownership, assets, and dependencies across an organization. By applying business architecture layers to AI systems, teams can systematically answer who, what, where, why, and when questions about AI deployment and accountability. The framework enables AI Red Teams to: (1) scope testing beyond individual LLM vulnerabilities to entire AI systems, (2) create comprehensive component diagrams, (3) perform threat modeling at appropriate abstraction layers, and (4) validate risks systematically. Security and governance teams gain clarity on accountability chains, asset inventories, and incident response procedures specific to AI systems.
Attendees will learn practical techniques for implementing this layered approach in their organizations, including methods for identifying AI system boundaries, establishing clear ownership, and integrating AI security testing into existing processes.
Bio: Founder of the ServiceNow AI Red Team and AI Blue Team; member of the AI Alliance working group, CoSAI, BSA, and the DEF CON AI Village steering committee. 42 years of uninterrupted experience in information technology and security. Evangelizing the AI Shared Responsibility model. Former FedRAMP ISSO, former DR tech lead at VMware, and Enterprise Architect at VMware and Life Tech. The most fascinating place I've worked is SRI International in Menlo Park, birthplace of the Internet, the computer mouse, Siri, the Stanford Dish, psychic remote viewing, and MKUltra.
Brian Bartoldson (Researcher, LLNL) - AI Safety at Scale
Abstract: Scaling the compute directed toward AI has created new capabilities and new risks, producing concerns ranging from bias to human extinction. I will demonstrate how AI-based defenses like automated red-teaming also improve with scale. Foundational to our method for training red-teaming agents -- co-developed with Yoshua Bengio's group at Mila -- is an off-policy reinforcement learning (RL) algorithm that can efficiently leverage LLNL's vast HPC resources (to appear at NeurIPS 2025). Beyond granting agential capabilities, RL improves reasoning, and I will touch on our work that scales reasoning to improve model robustness to adversarial attacks.
Bio: Brian Bartoldson is a staff scientist in Lawrence Livermore National Laboratory’s AI Research Group. He holds degrees from Gettysburg College and Florida State University. His research focuses on the efficiency and robustness of large-scale neural networks. His recent work includes leading an effort to develop the first scaling laws for adversarial robustness, pretraining LLMs on copyright-free datasets, and RL post-training of LLMs to create automated red-teaming agents.
Fady Yanni (Co-Founder, HackAPrompt) - Prompt injection risk landscape
Abstract: TBD
Bio: TBD