A Multi-Tiered Approach to Bridge AI Responsibility Gaps by Caroline Baumöhl
Talk description:
Algorithms guide what media we consume, which jobs we get hired for, who we see on dating sites, and whether our loan application gets approved or rejected. Increasingly, AI influences issues of criminal justice, suggesting who should be granted parole or asylum. In the future, AI might be charged with making life-and-death decisions: autonomous vehicles deciding whom to sacrifice in an unavoidable crash, and lethal autonomous weapon systems deciding whether to attack or surrender. The stakes are high and will only get higher.
Whilst in the long term AI might make better and fairer decisions than we humans ever could, it will inevitably make errors along the way. Who should be responsible for these? If there is no one we can appropriately hold responsible, we face a so-called “Responsibility Gap”. Since the term was coined in 2004, AI-induced responsibility gaps have received much scholarly attention. Many solutions have been proposed, but each has significant limitations, and the issue remains far from resolved. Rather than attempting to find “the” solution, Caroline's dissertation analyses whether we can combine multiple proposed solutions, each flawed in itself, to arrive at an outcome that is greater than the sum of its parts.
About the speaker:
Caroline Baumöhl is finishing her Master's in Practical Ethics at the University of Oxford, where she is developing a framework to resolve AI-induced responsibility gaps for her dissertation. She was a Winter Research Fellow at the Centre for the Governance of AI, where she worked on risk estimation techniques for frontier AI models. She is also a co-founder of Impact Drive, a platform for young leaders interested in ethical entrepreneurship, social impact, and the future of AI. She has a background in Law (University of Cambridge) and Business Administration (Ludwig Maximilian University of Munich).