AI for compliance: spotting where AI goes rogue
As financial services teams adopt AI across compliance functions, the challenge is ensuring the outputs are reliable, reviewable, and regulator-ready. This workshop focuses on designing human-in-the-loop compliance AI workflows that are safe and auditable.
The emphasis is practical: how to build defensible AI workflows that can be relied on in operational settings.
You’ll learn how to:
Design evaluation models that assess AI outputs for accuracy, completeness, and regulatory alignment
Define human review points and design interventions that meaningfully improve reliability
Use confidence scoring and guardrails to detect uncertainty, prevent drift, and constrain model behaviour
Embed AI governance building blocks that make workflows explainable and auditable
Create escalation logic that routes higher-risk outputs for enhanced oversight
Capture evidence and audit trails to meet regulatory and internal assurance expectations
Design for repeatability so outputs remain consistent across users, prompts, and time
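To make the confidence-scoring, escalation, and audit-trail ideas above concrete, here is a minimal sketch of risk-based routing with an evidence log. The thresholds, route names, and `Decision` structure are illustrative assumptions for the workshop's concepts, not a prescribed implementation; real values would come from your own risk policy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative thresholds (assumptions) -- set these from your risk policy.
AUTO_APPROVE_MIN = 0.90
HUMAN_REVIEW_MIN = 0.60

@dataclass
class Decision:
    """An AI output awaiting routing, with an audit trail of events."""
    output_id: str
    confidence: float
    route: str = ""
    audit_trail: list = field(default_factory=list)

def route_output(decision: Decision) -> Decision:
    """Route an output by confidence score and record the step for audit."""
    if decision.confidence >= AUTO_APPROVE_MIN:
        decision.route = "auto_approve"
    elif decision.confidence >= HUMAN_REVIEW_MIN:
        decision.route = "human_review"          # defined human review point
    else:
        decision.route = "escalate_to_mlro"      # enhanced oversight
    # Capture evidence: every routing decision is timestamped and logged.
    decision.audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "routed",
        "route": decision.route,
        "confidence": decision.confidence,
    })
    return decision

low = route_output(Decision(output_id="txn-001", confidence=0.42))
print(low.route)  # escalate_to_mlro
```

Because the routing logic is deterministic given a confidence score, the same input always produces the same route, which supports the repeatability and auditability goals listed above.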
Who should attend
Compliance leaders, policy teams, MLROs, risk and assurance professionals, audit teams, and anyone responsible for designing or overseeing AI-enabled compliance processes.
