
AI Safety Poland Talks #12

Google Meet
About Event

Welcome to AI Safety Poland Talks!

A biweekly series where researchers, professionals, and enthusiasts from Poland or connected to the Polish AI community share their work on AI Safety.

💁 Topic: Open-Source Intelligence for AI Risk Governance
📣 Speaker: Michał Kubiak
🇬🇧 Language: English
🗓️ Date: 16.04.2026, 18:00
📍 Location: Online

Speaker Bio
Michał Kubiak is an independent researcher specializing in European AI regulation and AI risk management. His policy experience includes roles as AI Policy Officer at the European DIGITAL SME Alliance (Brussels) and at the Observatorio de Riesgos Catastroficos Globales. He has also worked as a teacher and facilitator, leading sessions at ML4Good's AI governance boot camps and at BlueDot Impact's AI Governance and AGI Strategy courses. His research background spans transformative AI policy - including work through the Supervised Program for Alignment Research (SPAR) - and earlier work in industrial mathematics, applying STEM methods to real-world problems for businesses, governments, and other institutions.

Abstract
As advanced AI introduces society-wide risks - from cyberattacks and biosecurity threats to labour market disruption and privacy erosion - regulators face a critical evidence dilemma: how to govern effectively when safety research lags behind capability development.

This presentation explores how open-source intelligence (OSINT) can help bridge the gap between safety research and global decision-making. We will examine how OSINT tools can support: continuous risk assessment; layered, defence-in-depth approaches to containing AI-driven threats; Technical AI Governance (TAIG) measures; and international "AI Red Lines" prohibiting the most dangerous AI uses.

Join us to explore how these approaches can inform evidence-based AI governance - and what they mean for the regulatory frameworks shaping AI's future.

Presented by
AI Safety Poland
AI Safety Poland is a community in Poland dedicated to reducing the risks posed by artificial intelligence.