

Technical AI Safety (TAIS) Conference 2026
TAIS 2026 is a free, one-day AI Safety event to be held Thursday, 14th May, 2026 at the Examination Schools, Oxford. This conference is proudly brought to you by our Partners: Oxford Martin AI Governance Initiative and Noeon Research. Sponsored by MATS. Information support from IASEAI and Foresight Institute. Volunteer support from OAISI.
For our 2026 conference, we hope to welcome back all the attendees who joined us in 2024 and 2025, as well as any other domestic and international researchers and professionals interested in discussing AI Safety.
There is no waitlist or limit for virtual attendance; however, in-person venue capacity is limited to 300 people (including staff, speakers and poster presenters). First come, first served.
We welcome attendees from all backgrounds, regardless of prior research experience. The event is free, so please come and join us!
Confirmed Speakers:
Gary Marcus | New York University
Victoria Krakovna | Google DeepMind, Future of Life
Seán Ó hÉigeartaigh | CFI, University of Cambridge
Sara Bernardini | University of Oxford
Alessio Lomuscio | Imperial College London, Safe Intelligence
Oliver Sourbut | Future of Life Foundation
Markus Anderljung | Centre for the Governance of AI
Fazl Barez | University of Oxford
An informal networking party will immediately follow the conference at a nearby private venue.
AGENDA (subject to change)
09:30 - 10:15: Registration, Pre-conference coffee & tea / welcome
10:00 - 10:30: Opening Ceremonies
10:30 - 11:00: Fazl Barez, “What does it mean to understand, in the age of AGI?”
11:00 - 11:30: Victoria Krakovna, “Evaluating Scheming Propensity with Realistic Honeypots” (not broadcast)
11:30 - 11:45: Coffee break 1 & poster session
11:45 - 12:15: Gary Marcus, “LLMs are not the way to alignment”
12:15 - 13:00: AI Safety Panel, “AI Safety in the age of AI reasoning”
Panelists: Gary Marcus (New York University), Victoria Krakovna (Google DeepMind, Future of Life), Andrei Krutikov (Noeon Research), Fazl Barez (University of Oxford).
Moderator: Blaine Rogers (Noeon Research)
13:00 - 14:00: Lunch & poster session
14:00 - 14:30: Sara Bernardini, “Designing Safe, Risk-Aware Autonomous Systems”
14:30 - 15:00: Alessio Lomuscio, “Robustness Verification of Machine Learning Systems”
15:00 - 15:15: Coffee break 2 & poster session
15:15 - 16:00: AIGI Panel, “Latest AI Governance Developments in the US, China & EU”
Panelists: Luise Eder, Robert Trager, Nicholas Caputo & Miro Pluckebaum
Moderator: Lisa Klaassen
16:00 - 16:30: Markus Anderljung, “Reflections on Frontier AI Regulation”
16:30 - 16:45: Coffee break 3 & poster session
16:45 - 17:15: Seán Ó hÉigeartaigh, “Prospects for West-China cooperation on AI safety”
17:15 - 17:45: Oliver Sourbut, “Risk Modelling—and Safety Engineering?—for Loss of Control”
17:45 - 18:15: Closing Ceremonies
Data and recording notice
By registering you acknowledge that Noeon Research UK Ltd (data controller) will process your personal data to administer TAIS 2026, and that:
(i) your name, affiliation and email may be shared with sponsors listed on the TAIS 2026 website at the time of the event, on the basis of our legitimate interests in supporting them. You can opt out at any time — including before the event — by emailing [email protected]; and
(ii) the conference will be filmed, photographed, audio-recorded and livestreamed, and you may appear in those recordings, which may be published in any media without further notice or compensation.
See our Privacy Notice [https://noeon.ai/privacy-policy/] for your full data-protection rights.