Cover Image for “Ignore All Previous Instructions”: How to Lead an AI-Secure Organization
Presented by
The Opening Door
A hub for trainings and events hosted by The Opening Door, focused on responsible AI and the future of work.
“Ignore All Previous Instructions”: How to Lead an AI-Secure Organization

Zoom
Registration
Welcome! To join the event, please register below.
About Event

AI tools are only as trustworthy as the guardrails around them — and right now, most organizations don't have enough of them.

Prompt injection is one of the fastest-growing threats in enterprise AI: a technique that manipulates AI systems into ignoring their instructions, leaking sensitive data, or acting in ways their operators never intended. It doesn't require a sophisticated hacker, either. Sometimes, all it takes is four words.

In this webinar, you'll learn what prompt injection actually is, why it matters for your organization, and what responsible AI leadership looks like in response. No technical background required — just a commitment to getting this right.

You'll leave with:

  • A clear, jargon-free understanding of prompt injection and why it's a boardroom issue

  • The questions every leader should be asking their AI vendors and internal teams

  • A practical framework for building AI security into your governance strategy

Your AI strategy is only as strong as your understanding of its vulnerabilities. Let's close that gap together.
