Presented by
Newspeak House
The London College of Political Technology

Origins and metaphors of ‘AI Safety’

Registration
Approval Required
Your registration is subject to approval by the host.
About Event

This session is part of the How to Think about Tech? The Case of ‘AI Safety’ study group* initiated by some of the fellowship candidates of the 2025/2026 Introduction to Political Technology (https://newspeak.house/study-with-us) course. It is open only to faculty and fellowship candidates.

The last session discussed framing ‘AI safety’ as a technical domain that should be equally preoccupied with short-term risks and more pragmatic ‘accidents’. In this session, we will explore some of the field’s foundational concepts as ‘charter myths’. In anthropology, charter myths are stories that ground a society's practices, beliefs, and institutions in a foundational narrative. The AI Safety field is defined by a powerful set of concepts, such as the ‘alignment problem’, ‘superintelligence’, ‘instrumental convergence’, and ‘existential risk’. The circulation of these concepts does not simply state a (future) fact; it creates a new political reality, elevating the community's core concern to the highest level of global governance and increasing its cultural and financial capital.

Recommended readings: 

Superintelligence: Paths, Dangers, Strategies - Nick Bostrom

Human Compatible: Artificial Intelligence and the Problem of Control - Stuart J. Russell 

If Anyone Builds It, Everyone Dies - Eliezer Yudkowsky and Nate Soares

LessWrong blog https://www.lesswrong.com/ 

Please come prepared to share your ideas by considering questions like these: How do concepts like ‘alignment’, ‘superintelligence’, ‘instrumental convergence’, or ‘existential risk’ function to build a community? How do they define who is an ‘insider’ and who is an ‘outsider’? What kind of story do these texts tell? What is the emotional resonance of these metaphors? What feelings (e.g., urgency, fear, importance, intellectual superiority) do they seem designed to evoke? How do these concepts create a ‘new political reality’? How do they justify specific actions, funding priorities, or claims to authority? And, of course, bring your own questions.

____

* The study group aims to map and explore how ideas of ‘AI safety’ are made, circulated, and acted upon. The object of study is not the technical feasibility of AI safety ideas or the objective probability of AI risks, but rather the social field of ‘AI Safety’ itself (its epistemic community, institutions, system of beliefs, and power structures). We analyze the community's texts and concepts as socio-cultural artifacts while trying to develop our own thinking about how ‘responsible AI’ can be implemented in practice.

Location
Newspeak House
133 Bethnal Grn Rd, London E2 7DG, UK