

The Risk Decisions Behind AI: Human Dynamics, Trade-offs, Blind Spots, and What Comes Next
October 1 | AI Boston Week 2025
Panel:
A cross-sector panel of experts from industry, research, and government will share firsthand insights from building, governing, and deploying AI systems:
Michael Muller – Researcher and Inventor @ IBM
David Piorkowski – Human-AI Collaboration and Governance @ IBM
Sabrina Mansur – Director @ MA AI Hub
Overview:
Which AI risks actually matter most—and how do people decide what to do about them?
As AI systems scale, conversations around risk often remain abstract. Terms like misinformation, bias, misuse, and system failure are widely acknowledged, but the harder questions of prioritization, trade-offs, and timing are frequently left unaddressed.
In the real world, AI teams are constantly making decisions under pressure: Which risks should be mitigated now? Which can be monitored? And which—at least for the moment—can be ignored?
This session goes beyond the buzzwords to examine the human dynamics behind AI risk decisions. Together, we’ll explore:
How incentives from regulators, investors, and markets influence risk priorities
Where human blind spots, heuristics, and cognitive biases complicate safety efforts
What “human-in-the-loop” oversight looks like in practice—and how it affects accountability as AI agents take on more autonomy
The real-world strategies emerging to navigate trade-offs between safety, innovation, and scale
What to Expect:
This interactive session invites attendees to step into the role of AI decision-makers. You’ll face real-world constraints, allocate limited “risk budgets,” and navigate dilemmas that teams face every day. Expect thought-provoking exercises that reveal how values, constraints, and competing incentives shape the future of AI safety.
Agenda:
Welcome & Networking
Panel Discussion
Get Involved: Opportunities to Collaborate on Aethos Research
Decision Lab: AI in Practice
Thought-provoking scenarios
Hands-on exercises
A deeper look at how values, constraints, and incentives collide in the work of building safe, responsible AI
Join Us:
This session is for anyone curious about how real AI risk decisions are made, not just in theory but in practice. You'll hear from experts, take part in hands-on activities, and walk away with a better understanding of the challenges and trade-offs behind building safe and responsible AI.
We hope to see you there!