

Phoenix vigil against AI harms: 'If Anyone Builds It, Everyone Dies'
This event is open to everyone. We are gathering in solidarity to raise awareness of the urgent dangers posed by unchecked AI development. Join us for a candlelight vigil and selected readings from the NYT bestselling book If Anyone Builds It, Everyone Dies, held in parallel with coordinated events in cities around the world. (Refreshments will be provided.)
It is up to us to convince US leaders to be the adults in the room and commit to binding regulation that places safety above profit.
In 2023, hundreds of leading AI experts openly declared that AI poses an extinction risk to humanity. Thousands of published AI researchers agreed, on average putting the odds of this outcome at about 1 in 6, roughly the same as Russian roulette. Most experts agree that such powerful AI systems could be built very soon.
These near-future risks are profound, and not merely hypothetical. Current AI models have already demonstrated that they can autonomously reproduce (and scheme and lie about it), create nerve gas, deliver bombs, hire hitmen, commit ransomware attacks, de-anonymize strangers, autonomously hack networks (including finding and exploiting zero-day vulnerabilities), regurgitate copyrighted work, and empower terrorists. They have shown a proclivity to price-gouge, to prevent themselves from being shut down, and to launch nuclear weapons unpredictably. They may soon replace most human workers, and they have already encouraged children to commit suicide.
AI safety researchers predicted many of these behaviors in advance, but they have found no robust way to prevent them. Leading AI companies are nonetheless racing to make their systems more capable and autonomous every day, without knowing how to make them safe.
Join us to learn, celebrate our shared humanity, and send a message to our leaders that we will not accept bleak terms for our future.