Threat Models & Mitigations from Physical AGI - Benjamin Alt
Most AI takeover scenarios involve AI crossing the cyber-physical boundary and acting autonomously in the real world. As a community, we currently lack robust epistemics on how that would actually happen, or how to detect it early. Meanwhile, physical AI capabilities are developing rapidly across drones, logistics robots, lab automation, and autonomous weapons. Frontier models are already being deployed on physical hardware in these contexts, and the threat landscape is changing. The robotics community has decades of safety engineering experience, but it is unclear to what extent these techniques can be applied in the context of AGI or ASI.
This talk and interactive Q&A, hosted by Safe AI Germany (SAIGE), sketches five threat models arising from physical AGI: asymmetric autonomous violence, AI-assisted bioweapons development, infrastructure capture via self-replication, AI-enabled coups, and embodied emotional manipulation. It then proposes physical AI evals, run on real hardware, as a way to resolve the epistemic uncertainty around which of these threats are near, which are far, and which we are missing entirely.
🎙 The Speaker:
Benjamin Alt is the Technical Director at the University of Bremen's AICOR Institute for Artificial Intelligence, where he works on safe cognitive architectures for robots, neurosymbolic representations, and hybrid planning systems.
Who should attend?
This session is recommended for robotics engineers, ML researchers, and policy experts tracking frontier risks, as well as anyone curious about AI Safety and eager to understand the emerging challenges of physical AGI.
Date: Wednesday, 15 April 2026
Time: 18:00 - 19:00 CEST
Location: See Google Meet link.