An experiment in collective intelligence for AI alignment
Please arrive between 5:45 and 6:15pm; we start at 6:15pm and close the doors shortly after.
===
It is often said that AI alignment is primarily a coordination problem. Usually, when people say this, they are thinking of policy-making. That work is important, but it leaves a lot on the table.
This workshop is an experiment in group coordination and coherence. What happens when we bring together a group who cares deeply about the same problem, and create an environment where connection and collaborative creativity can flourish? How does this shift our relationship to the problem? What becomes possible?
The problem we care about most is ensuring powerful AI benefits humanity and all life. In facing this problem, many of us have become isolated and burned out, oscillating between "we're all going to die" and "I alone must save the world." We reify our problems until they feel immovable. We lose access to the creative, relational intelligence that complex challenges actually require.
Amid this fragmentation and collective stress, the most surprising and reliable resource we've found is playfulness. It builds trust, belonging, and agency. We shift from "I carry this alone" to "we care together."
What becomes thinkable from a coherent state that wasn't thinkable from a fragmented one?
What we'll do
A facilitated 2½–3 hour format
Surface the tensions, fears, and stuck places we're carrying
In small groups, design and play relational games that meet what's alive in us
Return to the hard problems from a different state
What people report
A sense of hope — realistic and inspiring
Feeling nourished, with more energy than before
Playfulness and aliveness
A strong sense of belonging
Stress regulation and co-regulation
Hosted by Connecting Intelligence (https://www.connectingintelligence.com/).
