

Inside SaferAI: Making AI risk measurable
How do you actually measure the risk a frontier AI system poses? Join SaferAI and BlueDot Impact for a look at how SaferAI is building the technical infrastructure that governments and companies use to manage AI risk, from quantitative risk models to public ratings of frontier labs to the standards that codify EU AI Act compliance.
SaferAI works at the intersection of technical risk research and AI policy. Their work spans four streams: risk modeling across cyber, CBRN, and loss-of-control threats; frontier AI risk management, advising leading AI companies on operationalizing their safety frameworks and tracking the ecosystem through the SaferAI ratings; standards leadership at CEN-CENELEC (EN 18228) and ISO/IEC (42119-8); and policy engagement with EU regulation, the OECD and G7, and, more broadly, middle powers.
We'll hear from Henry Papadatos (Executive Director) on SaferAI's broader work and theory of change, then from Jakub Kryś (Research Scientist) on the quantitative risk modeling framework specifically: how to translate AI capabilities into measurable estimates of real-world harm.
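To make the risk-modeling idea concrete, here is a toy sketch of one common decomposition (our illustration, not SaferAI's actual framework): expected annual harm as attempt frequency × success probability (with an AI "uplift" factor) × harm per success. Every scenario name and number below is invented for the example.

    # Toy quantitative risk model: illustrative only, not SaferAI's methodology.
    # All scenario names and numbers are invented for this example.
    from dataclasses import dataclass

    @dataclass
    class ThreatScenario:
        name: str
        attempts_per_year: float   # expected number of attempted misuse events
        p_success_baseline: float  # success probability without AI assistance
        uplift: float              # multiplicative uplift from model capabilities
        harm_per_success: float    # expected harm per successful attempt (in $)

        def expected_annual_harm(self) -> float:
            # Apply the capability uplift, clamping the probability at 1.0.
            p_success = min(self.p_success_baseline * self.uplift, 1.0)
            return self.attempts_per_year * p_success * self.harm_per_success

    scenarios = [
        ThreatScenario("cyber: ransomware campaign", 200, 0.02, 3.0, 5e6),
        ThreatScenario("CBRN: synthesis pathway misuse", 5, 0.001, 10.0, 1e9),
    ]

    for s in scenarios:
        print(f"{s.name}: ${s.expected_annual_harm():,.0f}/year")
    total = sum(s.expected_annual_harm() for s in scenarios)
    print(f"Total expected annual harm: ${total:,.0f}/year")

The point of a decomposition like this is that each factor can be estimated, sourced, and challenged independently, and the result is a number that thresholds and policy triggers can be set against rather than a qualitative label.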
You'll come away with:
A clear picture of what SaferAI does across all four work streams and why it matters
How quantitative risk modeling works in practice, and what it lets policymakers do that qualitative assessment can't
Open roles at SaferAI and what they look for in candidates
🎙️ Speakers
Henry Papadatos – Executive Director
Jakub Kryś – Research Scientist
Open roles at SaferAI: https://www.safer-ai.org/careers (currently hiring a Standards Researcher, Governance Researcher, Research Engineer, and Policy Associate)