

When to Pause AI?
There is a race to develop Artificial Super Intelligence (ASI), an AI that can learn and execute any task better than a human can, as quickly as possible and with minimal consideration for safety. Some groups, such as PauseAI, think the risks of this approach outweigh the potential benefits. Assuming a Pause could be adopted and enforced, when would you push the Pause button? What capabilities would AI systems need to demonstrate to convince you that a catastrophic outcome was both likely and arriving soon enough to merit pausing?
Let's get together to read + discuss. No need to read ahead of time! In the first hour, we'll read the following together:
I'm extremely worried that superintelligent AI will kill everyone: a succinct overview of why people are worried about ASI and want to Pause, a.k.a. the "doomer" case.
Common Ground between AI 2027 & AI as Normal Technology: an adversarial collaboration between AI as Normal Technology (a non-doomer group) and AI 2027 (mostly doomer) which helpfully outlines the boundaries of disagreement on this issue.
After silently reading together, we'll do a reading comprehension spot check, then engage in open discussion.
Detailed Directions to the Food Court
Enter from University Avenue and walk east until you see the escalators. Take the escalators down; the food court is to the east of them. If you get lost or confused, ask a security guard to direct you to the food court.