

VALUES-DRIVEN RISK ASSESSMENT WORKSHOP
An interactive workshop on practically integrating values and ethics into AI risk assessment frameworks, moving beyond principles-based governance to practical approaches that shape how AI systems are actually built and deployed.
What we'll cover:
Identifying value misalignments in AI systems (between stakeholders, between stated principles and actual behavior, between short-term metrics and long-term impact)
Mapping stakeholder and personal values to technical decisions (making abstract values concrete and actionable)
Building risk assessment frameworks that reflect what genuinely matters - not just what's easy to measure
Moving from principles-based governance to objective, testable approaches
Who should attend:
Anyone working on agentic AI development, risk management, product leadership, AI governance, or responsible AI practices. Whether you're building AI systems, evaluating them, or shaping policy around them - if you're wrestling with how to make AI governance actually work in practice, this workshop is for you.
Format:
Interactive, hands-on workshop limited to 10-15 participants to enable meaningful discussion and peer learning. We'll work through real examples, share frameworks, and build practical approaches together.
What you'll leave with:
Concrete frameworks and methodologies for values-driven risk assessment that you can apply immediately to your own work - whether you're shipping AI products, evaluating systems, or shaping governance.
Facilitated by:
Karol See, Responsible AI Engineer and product leader. Recently led development of an AI testing platform using neurosymbolic AI and LLM evaluation frameworks. MSc in Artificial Intelligence and Ethics (Northeastern University).
Co-hosted with ClimateAction.tech's Responsible AI strand and ustwo.