

aiLights - Inside the BIAS Project: How societal biases are reflected in language models
Research has shown that societal prejudices can be reflected in AI models, and even reinforced by them. This poses risks when such systems are used.
The BIAS project, funded under the Horizon Europe research program, investigates how such biases manifest in AI applications for human resources (HR) and in common language models. The aim is to identify how this bias can be detected on the one hand, and how it can be reduced or mitigated on the other.
The project focuses on applications in various European languages and takes an interdisciplinary approach, combining technical, social science, and legal perspectives.
Speaker:
Prof. Mascha Kurpicz-Briki
Bern University of Applied Sciences & BIAS Project
Mascha Kurpicz-Briki is an AI expert and professor of computer science. She is co-lead of the Applied Machine Intelligence research group and the Generative AI Lab at the Bern University of Applied Sciences in Biel, Switzerland, where she works on applied natural language processing (NLP) and machine learning for a range of use cases.
As a keynote speaker, author, and founder of Makubri Technologies Sàrl, she also creates formats that make artificial intelligence and digital technologies understandable and engaging.
📅 Date & Time: 18.05.2026, 9:00-9:45 (CET)
⏱️ Duration: 45 minutes
🌐 Language: English
🗣️ Format: Presentation + Q&A
📍 Location: Online (Zoom + YouTube Livestream)
👥 Audience: Broad public, HR professionals, AI developers, policy makers
🔗 More information about aiLights: https://ailights.org/
We thank our partner Oertli-Stiftung for their support.
If you have any questions, please reach out to:
📧 [email protected]