

Book 4 → If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (Eliezer Yudkowsky & Nate Soares)
About Event
Theme: AI alignment / control
Presents a stark case, built through parable and argument, that creating a superhuman AI without reliable safety guarantees will lead to human extinction, a warning the authors frame as non-negotiable.
The authors, OG figures in AI safety, urgently argue that this may be the most important book of our time, painting a world that is speeding toward catastrophe unless we change course.