

AI Safety Needs Generalists — Here's How to Get In
AI safety's biggest talent bottleneck right now is generalists: program managers, operators, chiefs of staff, fieldbuilders, founders. Two recent analyses converge on the same diagnosis: research postings attract dozens of qualified applicants, while non-research postings often surface only a few.
The work is high-leverage and the bar is high, but the pathway in is largely informal. Referrals and ecosystem fluency do most of the work, which leaves people approaching from outside the community without a clear route in.
This panel features people doing exactly this kind of work at different high-impact AI safety organizations.
Panelists
Joping Chai — People & Operations Manager at Apollo Research, the London-based AI safety org focused on detecting deceptive alignment. Joping joined Apollo from outside the AI safety community, after roles at BCG X, Shopee, and Singapore's Public Service Division.
Helena Tran — Program Manager at Constellation, running the Generator Residency this summer — a new three-month residency in Berkeley designed explicitly to address the generalist talent gap, scaffolding early-career generalists into AI safety roles. Previously founded AI Safety at UC Irvine.
Yordanos Asmare — Leads People Operations at FAR.AI, a US-based AI safety research lab. Yordanos joined FAR.AI after more than a decade scaling talent and operations across venture capital, startups, and NGOs, including as Head of Recruiting at Liftoff Mobile and Head of Talent & Partnerships at A2SV. BA in English Literature from Stanford.
Moderator
Oliver Kurilov — Field-builder at the London Initiative for Safe AI (LISA), where he runs programming for professionals pivoting into AI safety. Governance Fellow at the Cambridge AI Safety Hub. Took an intermission from his Columbia PhD to work on AI safety full-time, after stints in deeptech VC and building a health-tech startup at Cambridge Enterprise.
What we'll cover
What ops, programs, and special projects work actually looks like day-to-day across a frontier safety lab, a residency program, and a London hub
How each panelist got into the field, and what they'd tell someone trying to break in now
The kinds of generalist backgrounds AI safety orgs are hiring for, and the "am I EA-fluent enough" question
Open roles across the panelists' orgs and adjacent programs
Followed by audience Q&A via Slido. A recording will be posted to BlueDot's YouTube channel.