

If Anyone Reads It, Everyone's Welcome
*****
I HAVE 10 COPIES OF THE BOOK TO GIVE AWAY! I got them for free from the publishers, so please let me know if you would like a copy.
*****
(Credit to the Ottawa Rationalists for the title of this meetup.)
We're having a little gathering to discuss Eliezer Yudkowsky and Nate Soares' newly released book, If Anyone Builds It, Everyone Dies.
We're going to try to make it through these questions suggested by the authors:
https://docs.google.com/document/d/13zm9kx8-gGTRk6e9H-4_sFfrvIbmp1utvCKcGxH_kGM/edit?usp=sharing
Here's the first section, copied for your convenience:
In the introduction, the authors say they intend to lay out a case that:
It is possible to build machine intelligence that surpasses human intelligence.
If this happens, the default outcome is human extinction.
The creation of machine superintelligence can still be prevented.
To what extent do you agree with each of these claims individually? What are some examples of strong evidence that would shift your views on each claim in one direction or another, if you saw it?
There's quite a bit to get through in 2 hours. If we don't get through it all, I'm open to planning a follow-up meetup.
You don't have to have read the whole book to attend, but it's recommended.
Hope to see you on November 11! (It's Remembrance Day, which I thought was vaguely apt.)