Monthly seminars on Guaranteed Safe AI R&D. https://www.horizonomega.org/p/guaranteed-safe-ai

Verifiable AI-Enabled Autonomous Systems with Conformal Prediction

Lars Lindemann – ETH Zurich

Accelerated by rapid advances in machine learning and AI, there has been tremendous success in the design of AI-enabled autonomous systems in areas such as autonomous driving, intelligent transportation, and robotics. However, these exciting developments are accompanied by new fundamental challenges regarding the safety and reliability of increasingly complex control systems in which sophisticated algorithms interact with unknown dynamic environments. Imperfect learning algorithms, system unknowns, and uncertain environments require design techniques that rigorously account for uncertainty. I advocate for the use of conformal prediction (CP) — a statistical tool for uncertainty quantification — due to its simplicity, generality, and efficiency, as opposed to inefficient and conservative model-based verification techniques. My goal is to show how we can use CP to predict failures of AI-enabled autonomous systems during their operation. In particular, we leverage CP to design two predictive runtime verification algorithms (an accurate and an interpretable version) that compute the probability that a high-level system specification is violated. We will also discuss how robust versions of CP can handle distribution shifts that arise when the deployed system differs from the system at design time. Lastly, I will outline how we can use CP to solve the problem of designing safe learning-enabled systems.
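To make the core CP mechanism concrete, here is a minimal sketch of split conformal prediction for a scalar prediction task. This is not the speaker's implementation; the predictor, noise model, and error level `alpha` are illustrative assumptions. The key idea is that a quantile of nonconformity scores from a held-out calibration set yields prediction regions with a finite-sample coverage guarantee, under exchangeability of calibration and test data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a learned predictor forecasts a system state;
# here we fake it as ground truth plus Gaussian noise.
n_cal = 1000
y_true = rng.uniform(0.0, 10.0, size=n_cal)
y_pred = y_true + rng.normal(0.0, 0.5, size=n_cal)

# 1. Nonconformity scores on the calibration set (here: absolute error).
scores = np.abs(y_true - y_pred)

# 2. Conformal quantile at level 1 - alpha, with the standard
#    finite-sample correction (n + 1 in the numerator).
alpha = 0.1
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(scores, q_level, method="higher")

# 3. At test time, the interval [y_pred_new - q_hat, y_pred_new + q_hat]
#    contains the true value with probability at least 1 - alpha
#    (marginally, assuming exchangeability).
def prediction_interval(y_pred_new):
    return (y_pred_new - q_hat, y_pred_new + q_hat)
```

The same recipe generalizes by swapping the nonconformity score, e.g. to the predicted robustness of a temporal-logic specification in the predictive runtime verification setting discussed in the talk.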

Paper: https://arxiv.org/pdf/2409.00536

Guaranteed Safe AI seminars

The monthly seminar series on Guaranteed Safe AI brings together researchers to advance the field of building AI with high-assurance quantitative safety guarantees.
