

Existential risks and emerging risks
Registration
About the Event
What could go catastrophically wrong in the 21st century?
We explore risks from misaligned AI, engineered pathogens, great-power conflict, and other existential threats. In this session we aim to understand why some researchers see these risks as unusually high‑stakes and neglected.
We connect philosophical longtermism (Session 3) to concrete policy and research questions.
Suggested Readings (30–45 min total max)
Read one primary + skim others:
AI risk (power‑seeking systems) — 80,000 Hours
Engineered pandemics — 80,000 Hours
Great power conflict — 80,000 Hours
Optional
Catastrophic climate change — 80,000 Hours
Nuclear weapons — 80,000 Hours
Post‑Session Extra Depth
The Precipice — Toby Ord
Location