38 Going
Private Event

Building AI Products People Can Trust

Hosted by Ladies That UX Lisbon
Registration
2 Spots Remaining
About Event

AI products can tick all the compliance boxes… and still fail when they reach real users.

Why? Because risk doesn’t live only in the model — it surfaces at the interaction layer, where real user behaviour, edge cases and drift undermine trust over time.

Drawing on 17 years across product strategy, behavioural research, and AI governance in global technology organisations, Sara Portell walks through common ways AI products break down in production. She will introduce a practical framework for catching these failure modes early, translating regulatory and ethical requirements into product-level decisions, and building the governance foundation needed to sustain trust beyond launch.

What we’ll talk about

  • How designers and researchers can prepare for AI-related challenges

  • What it really means to work with AI in practice

  • How AI connects with research, behaviour, and decision-making


Event details

🗓️ NEW DATE 👉 May 13th - 18:30
📍 Nagarro Lisbon Office - R. Cap. Leitão 21
🗣️ Language: English


Agenda

  • 30 min talk

  • 15 min Q&A

  • Networking & chats at the end ✨

About Sara

Sara is a behavioural scientist and AI strategy and ethics practitioner with 17 years of experience helping organisations understand how people interact with AI, and what that means for product design, risk, and responsible innovation.

Her career spans some of the world's leading technology companies, including Shopify, Expedia, F-Secure, and Unit4, where she has worked at the intersection of behavioural research, UX, product strategy, and organisational change. She combines academic depth with hands-on practice: she holds an MSc in Strategy and International Business (ESSEC) and an MSc in Behavioural Science (LSE), and is currently completing doctoral research in Psychology and AI at Universidade Católica Portuguesa.

Sara is an Oxford-certified AI ethicist and an ISO/IEC 42001 AI Management Systems auditor. She is the founder of Human-Centric Responsible AI and co-founder of the AI Ethics Consortium, initiatives dedicated to helping organisations navigate where AI risk emerges in practice, and how to build AI experiences that are safe, ethical, and regulation-ready.


See you there! 🙌

Location
Nagarro
R. Cap. Leitão 21, 1950-050 Lisboa, Portugal