

AI Singapore Symposium on The Right to Work, Learn, Own & Choose
Artificial intelligence now permeates everyday life, shaping how people learn, work, create, and choose. It offers powerful assistance while quietly reshaping human agency. AI development should therefore be framed as a socio-technical challenge, not only because of its societal impacts but because core technical properties such as opacity, homogenisation, and persuasive fluency can erode meaningful human discretion. This symposium brings together experts in AI governance and technical AI research to discuss research directions that can guide researchers, developers, and policymakers toward balancing human agency with AI assistance, without framing the relationship as a zero-sum tradeoff.
This symposium facilitates knowledge sharing, expert discussion, and stakeholder engagement to foster collaboration between experts in AI governance and technical AI research, with the goal of developing AI assistants that respect human agency and uphold our rights* to work, learn, own, and choose.
Speakers:
The Organizational AI Efficiency Paradox by Prof. Jungpil Hahn
Prof. Jungpil Hahn is a Provost's Chair Professor in the Department of Information Systems at the School of Computing, National University of Singapore (NUS). He is also the Director of the NUS Fintech Lab, the Deputy Director of AI Governance at AI Singapore, and the Deputy Director of the Centre for Technology, Robotics, Artificial Intelligence & the Law. His research focuses on open innovation, organisational learning and knowledge management, software development processes, software project management, and human-computer interaction.
The Right to Work and the Right to Learn: AI for Adult Learning and Online Education by Prof. Ashok K. Goel
Prof. Ashok K. Goel is a professor of Computer Science and Human-Centered Computing in the School of Interactive Computing at the Georgia Institute of Technology and the chief scientist of Georgia Tech's Center for 21st Century Universities. He has been awarded the Robert S. Engelmore Memorial Lecture Award at the AAAI 2026 conference for his pioneering research contributions to biologically inspired design, case-based reasoning, and applications of AI in virtual teaching, as well as for his extensive contributions to AAAI, including his service as Editor-in-Chief of AI Magazine.
Towards Copyright Aware Language Modeling by Prof. Luke Zettlemoyer
Prof. Luke Zettlemoyer is a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a Senior Research Director at Meta. His research focuses on empirical methods for natural language semantics and involves designing machine learning algorithms, introducing new tasks and datasets, analyzing model performance, and, most recently, studying how best to develop self-supervision signals for pre-training. His honors include election as ACL President (2024), selection as an ACL Fellow (2021), a PECASE award (2016), an Allen Distinguished Investigator award (2014), and numerous best paper awards.
The Right to Think: How Thoughtfully Soft AI can Help by Dr Nancy F. Chen
Dr Nancy F. Chen leads the Multimodal Generative AI Group and the AI for Education Programme at the Institute for Infocomm Research (I2R), Agency for Science, Technology, and Research (A*STAR), Singapore. She is also a principal investigator at the Centre for Frontier AI Research (CFAR), A*STAR. Her research team works on multimodal, multilingual large language models with targeted applications in education, healthcare, and defense. Their technology has spawned multiple commercial spin-offs and has been deployed by the Ministry of Education. She is an ISCA Fellow, AAIA Fellow, and A*STAR Fellow, and has served as program chair for NeurIPS 2025 and ICLR 2023.
From Emergence to Evaluation: Understanding Theory of Mind, Persuasion, and Power Asymmetries in Intelligent Agents by Dr. Djallel Bouneffouf
Dr. Djallel Bouneffouf is a senior research scientist at IBM Research, Yorktown Heights. He has dedicated many years to online machine learning and data mining, with a primary focus on developing autonomous systems that can learn, adapt, and make decisions in uncertain environments. His work spans the public and private sectors, contributing to advances in artificial intelligence, reinforcement learning, and trustworthy AI. Over the past decade at IBM in the USA and Ireland, he has contributed to a wide range of projects involving reinforcement learning, brain modeling, and the development of AI systems with enhanced trustworthiness and interpretability.
Moderator and Co-Host
Prof. Simon Chesterman is David Marshall Professor and Vice Provost (Educational Innovation) at the National University of Singapore (NUS), where he is also the founding Dean of NUS College. He serves as Senior Director of AI Governance at AI Singapore and Editor of the Asian Journal of International Law. Previously, he was Dean of NUS Law from 2012 to 2022 and Co-President of the Law Schools Global League from 2021 to 2023.
Host
Assoc. Prof. Bryan Low is an Associate Professor of Computer Science and Associate Vice President (AI) at the National University of Singapore, the Director of AI Research at AI Singapore, and the Deputy Director of the NUS AI Institute. His research interests include automated ML/AI, data-centric AI for LLMs, agents in ML, and AI for Science.
*The rights framing is intended to align with widely recognised AI ethics principles, as synthesised in global surveys of users' normative expectations of AI, and does not propose new legal human rights.