
Where Are the People? Everyday AI Safety

Hosted by Alexandra Ciocanel & sugaroverflow
Registration
Approval Required
Your registration is subject to approval by the host.
Welcome! To join the event, please register below.
About Event

This session is part of the How to Think about Tech? The Case of ‘AI Safety’ study group, initiated by some of the fellowship candidates of the 2025/2026 Introduction to Political Technology (https://newspeak.house/study-with-us) course. It is open only to faculty and fellowship candidates.

While much AI safety research focuses on foundational issues like model alignment, robustness, and catastrophic risks, there is growing attention to practical harms, including hate speech, abuse, political bias and manipulation, bioweapon creation, and self-harm. Traditionally, AI safety research has relied on audits and controlled experiments. However, real-world harm often arises from the complex interactions between technologies, people, and institutions in specific societies. This session will therefore take people seriously, exploring how AI safety risks are present in our daily lives and examining public concerns and the unpredictable ways harm can manifest in everyday practice.

Some of the questions we’ll ponder: What counts as an AI safety risk, and who gets to define it? In what ways can we consider LLM outputs a political technology? Are people’s concerns about AI aligned with the risks emphasised in academic, policy, and corporate circles? How do technological imaginaries travel across different social and political contexts? How do people respond to technological systems when they feel powerless or exploited? In what ways can fiction, compared to technical or policy writing, help us anticipate or understand AI risks? What would AI safety look like if it were rooted in local community needs and interests?

Recommended readings 

Cory Doctorow - Radicalized

Social Change Lab - AI Safety Movement Report: Mapping Civil Society Response to AI Risks

Abeba Birhane - Algorithmic Colonization of Africa 

Jillian Fisher et al. - Biased LLMs can Influence Political Decision-Making (ACL Anthology)

Brown University - New study: AI chatbots systematically violate mental health ethics standards

Ada Lovelace Institute - AI Survey (Overview)

Watch: 

Documentary: Marginalised Aadhaar (2021) 

To have a fruitful discussion, please read at least one tale from Doctorow’s book and at least one article.

Location
Newspeak House
133 Bethnal Green Road, London E2 7DG, UK