Cover Image for BSmAI: AI Safety - Are safety frameworks only good as the people inside them?
Presented by
BehSci Meets AI
Curating a community and events to explore the applications of Behavioural Science to AI

BSmAI: AI Safety - Are safety frameworks only good as the people inside them?

London, England
Registration
Approval Required
Your registration is subject to host approval.
Welcome! To join the event, please register below.
About Event

The Human Factor: Are AI Safety Frameworks Only As Good As the People Inside Them?

AI safety discussions about what regulation should say and whether humans should oversee AI systems are accelerating, yet they often remain abstract or narrowly technical.

Behavioural science reframes both questions, asking not just what frameworks require but whether the humans and institutions inside them are equipped to deliver on those requirements. And where they aren't, who bears the risk?

This topic grounds safety in how people actually interact with AI, making it highly relevant for professionals who sit at the intersection of AI and human behaviour - whether you're building products, shaping policy, or researching how AI systems interact with the people who use them.

If you've ever shipped a transparency feature and wondered whether users actually read it, questioned whether "human oversight" holds up under cognitive load and commercial pressure, or felt that the governance conversation was missing something fundamental - this is the room for you.

We're bringing together product leaders, behavioural researchers, AI policy professionals and risk practitioners to have the conversation that too often gets skipped: not just what AI safety should look like, but what it takes for humans to deliver on it in practice.


This event will explore key themes such as:

  • The limits of transparency, warnings and user choice

  • Trust and over-reliance on fluent AI systems

  • How AI reshapes information environments and collective behaviour

  • Behavioural drivers of misinformation, manipulation and exploitation

  • Power asymmetries and incentive structures in AI deployment

  • Behaviourally informed approaches to governance and guardrails

You will likely enjoy this event if you are interested in:

  • Human factors in AI oversight - attention, trust, cognitive bias and institutional incentives

  • The psychology of trust and over-reliance on AI systems

  • How behavioural science can strengthen (or expose the limits of) AI governance frameworks

  • The gap between how AI safety is designed and how it plays out in practice

  • Evidence-based approaches to regulation that account for how people actually behave

  • What responsible product development looks like when users are predictably fallible

Event format

  • In-person, discussion-led evening with a strong emphasis on networking, connecting people working across AI safety, technology, research, and policy

  • Short introductory remarks from each speaker to ground the discussion

  • Moderated Q&A panel, drawing connections across perspectives and real-world challenges

  • Open conversation and networking throughout, with time to connect informally after the panel

Location
Please register to see the exact location of this event.
London, England