

Auditing AI in Conflict: Measuring Narrative Risk Across Languages and Models
AI is fast becoming a dominant “source of truth” in politics and war.
For many users, AI is now one of the first stops for interpreting political and wartime events. In this session, we’ll present a replicable audit framework for measuring narrative risk in LLM outputs across languages and share evidence-based findings, including a real-time censorship demo captured on screen (Yandex Alice).
We’ll also outline how institutions can request a custom audit and discuss pilot partnerships for 2026.
Agenda
Welcome & format
Why AI propaganda matters now
Audit framework (brief): what we measure, how it works, and why it scales
Evidence & key findings: language-triggered bias, false-balance failure modes, and a real-time censorship demo (Yandex Alice)
Independent validation & complementary research insights
Pilot partnerships (2026) + how to request a custom audit + closing
Q&A
Speakers
Ihor Samokhodskyi
Dr. Dariia Opryshko
This event is organised with the support of the European Union within its Eastern Partnership Civil Society Fellowship programme.