

Formal Guarantees for Frontier AI
Gagandeep Singh – Assistant Professor at UIUC; develops formal certification, monitoring, and synthesis methods for frontier AI systems (NSF CAREER, Open Philanthropy)
Formal verification is often dismissed as too rigid, complex, or unscalable for frontier AI systems such as LLMs, VLMs, and agents. In practice, this perception has led many researchers and developers to rely on less rigorous alternatives, such as benchmarking, red teaming, and adversarial attacks, to evaluate safety and performance.
In this talk, I will present a new class of efficient formal verification methods for frontier AI systems that certify safety properties such as secure code generation and bound catastrophic conversational risk, delivering stronger generalization guarantees than standard evaluation approaches. These results position formal verification as a necessary foundation for building reliable and trustworthy systems at scale, one that demands sustained, substantial investment.
Papers / readings:
BEAVER: An Efficient Deterministic LLM Verifier https://arxiv.org/abs/2512.05439
How Catastrophic is Your LLM? Certifying Risk in Conversation https://arxiv.org/abs/2510.03969
Lumos: Let there be Language Model System Certification https://arxiv.org/abs/2512.02966
Guaranteed Safe AI seminars
The monthly seminar series on Guaranteed Safe AI brings together researchers to advance the field of building AI with high-assurance quantitative safety guarantees.
Follow the series
Apply to speak
Video recordings
Mailing list
Series info website
Feedback
Donate 🧡