Cover Image for Saturday AI Safety Studio
Presented by
Events
Our mission is to improve the safety of advanced AI systems by empowering individual researchers and small organisations.
Registration
Approval Required
Your registration is subject to host approval.
Welcome! To join the event, please register below.
About Event

To address the risks posed by advanced AI, the field of AI safety must scale rapidly in the coming months and years. To date, efforts to scale the field have chiefly focused on technical programmes and fellowships geared towards researchers.

But AI safety cannot scale with technical talent alone - it needs people who can build the organisations, programmes, and infrastructure that enable this vital field to keep pace with frontier AI developments. That often entails scrappy, unguided work: designing new initiatives from scratch, building out operations at fast-scaling organisations, running field-building programmes, and shaping communications and culture - working on the things that don't have a playbook yet.

To address this gap, LISA is launching a different kind of on-ramp, built for generalists, operators, and builders who want to understand the AI safety field and test their fit by working on real problems alongside people already in it.

LISA is excited to announce AI Safety Studio - a series of structured meetups and co-working sessions for mid-career professionals who are actively looking to pivot into AI safety and want to see what it's like to work on real field problems - without quitting their jobs first. We will provide you with space, structure, caffeine, and support in an environment where you can meet fellow generalists and lock in to build. In small groups, guided by mentors from the LISA ecosystem, you will tackle real operational challenges faced by AI safety organisations - problems that matter right now: hiring pipelines, field-building strategy, programme design, communications, and operations.

By the end of the programme, you'll have a finished project to show for your time, together with a much more nuanced understanding of the AI safety landscape - putting you in a strong position to make your career pivot a reality.

Who are we looking for?

You're 3-8 years into your career. You might be in consulting, operations, product, marketing, policy, communications, strategy, or something else entirely. You're good at what you do - but you've also seen AI transforming your day-to-day work, and you have a feeling your skills could be useful somewhere bigger.

You don't need a technical background or PhD-level knowledge. You bring a strong track record of execution, genuine curiosity about making AI go well, and a willingness to commit a few Saturdays to challenges that call for a versatile, translational generalist skillset.

We want to invest in your career journey, so we will be selective and ask for real commitment. This isn't another lecture series or casual networking drinks - these are focused working sessions for people who are serious about exploring this space.

How do I apply?

Click below to fill in the short application form.

Our first session will run on the 9th of May, and we will accept applications on a rolling basis until Thursday the 7th of May. If shortlisted, we'll invite you to a 15-minute screening call to understand your interests and motivations for taking part.

Spaces are limited to 15 per cohort.

Location
London Initiative for Safe AI (LISA)
25 Holywell Row, London EC2A 4XE, UK