

Mini-Symposium on Accelerating AI Safety Progress via Technical Methods
Are you working on accelerating AI safety effectiveness for existential risks? Interested in contributing to this problem, learning about current efforts, or funding active work? Join us:
📍 Location & Time (Hybrid Event):
In-person: Picasso Boardroom, 1185 6th Avenue, NYC. Capacity is limited to 27 attendees and has been reached; the event is full for in-person attendance. **Please attend in person only if you have received an email from Martin stating 'your in-person attendance is confirmed'.**
Virtual: Unlimited capacity; Google Meet link below
Date: Friday 10/10 at 4 pm EDT
(One hour before EA Global NYC 2025 opens nearby)
🎯 The Challenge:
AI capabilities are advancing rapidly
Current research literature: Many AI safety approaches may not scale beyond human-level AI
Critical question: Given the catastrophic risks at stake, how can we accelerate progress toward effective technical AI safety solutions for powerful AI systems that may emerge in the near term?
🚀 Event Focus: This symposium may be the first to connect researchers, founders, funders, and forward thinkers working on technical methods for accelerating AI safety.
📝 Registration:
In-person registration (closed): The 27-attendee capacity has been reached
Even though Luma will send a 'Registration confirmed' email, please attend in person only if you have received an email from Martin stating 'your in-person attendance is confirmed'
Virtual registration: Open. After registering, please join the meeting via the following Google Meet link:
* Video call link: https://meet.google.com/aqi-exbu-bpw
* Or dial: (US) +1 919-729-2604 PIN: 837 901 545# More phone numbers: https://tel.meet/aqi-exbu-bpw?pin=1183057372325
🎤 Lightning Talks:
Format: 7-minute talk followed by 5 minutes of Q&A
Speaker list: Selected speakers are listed in a public sheet [here]
Post-event: A summary with speaker materials will be posted on LessWrong (with speaker permission)
💡 Topics/Agenda:
Accelerating discovery of effective safety solutions
Scalable effectiveness predictions for solution candidates
Automating safety research workflow steps
Technical methods to accelerate AI safety effectiveness for beyond-human-level AI
We look forward to your participation!