The One About Alignment And Trustworthiness in Agentic AI Systems
AI systems are being deployed faster than we know how to govern them. This AI Wednesdays, we look at two sides of that problem: whether AI actually understands the cultures it serves, and whether we can trust AI agents to act safely when no one is watching. Two speakers, one uncomfortable question: are we building AI we can actually rely on?
More About the Sharings
Dr Eric J. W. Orlowski (Research Fellow, NUS AI Institute) will share on “Cultural data isn’t culture; culture is culture: Rebalancing cultural alignment methodologies”.
Cultural alignment is one of the harder challenges in deploying large language models across different communities and contexts. Most current approaches rely on proxies like nationality, language, or benchmark scores, which can be useful but often make culture look more fixed and measurable than it really is.
This session examines why that gap matters and how to think about it more carefully. Rather than a technical protocol, it offers a framework for scoping cultural alignment to specific deployment settings, combining technical evaluation with qualitative insight, and asking better questions about what "culturally useful" actually means in practice. (Technical Level 200)
Matthias Chin (Founder and CEO of CloudsineAI) will share on “Trustworthiness in Agentic AI Systems: Visibility, Traceability and Control”.
AI agents are no longer experimental. They are in production, writing code, executing trades, and managing infrastructure autonomously; yet, by one estimate, 69% of deployed agents have zero security monitoring.
Unlike traditional software, agentic systems are unpredictable by design. They spawn sub-agents dynamically, inherit credentials across systems, and act on inputs that can be manipulated in real time. Every old security assumption breaks.
This session argues that securing agentic AI requires three things working together: visibility into every agent and tool call across your environment, traceability through audit logs that capture not just what an agent did but why, and control through guardrails that enforce least privilege and flag high-impact actions for human approval. Closing the gap between what agents are supposed to do and what they actually do in the wild is what trustworthy agentic AI looks like in practice. (Technical Level 200)
More About the Speakers
Dr Eric J. W. Orlowski is a Research Fellow at the NUS AI Institute (NAII) whose work sits at the intersection of AI governance, policy, and the practical realities of deploying AI systems across different social and institutional contexts. Trained in Science and Technology Studies and ethnographic research, he has worked across ASEAN with stakeholders ranging from grassroots communities and MSMEs to governments, as well as with governance ecosystems in the UK and Nordics, with a consistent focus on making AI governance practical, usable, and supportive of innovation. At NAII, he currently leads research on cultural alignment in AI, examining how teams can think more clearly about what it means for an AI system to be truly "culturally aligned" when existing benchmarks and proxy measures often fall short of capturing context, interpretation, and real-world use.
Matthias Chin is the Founder and CEO of CloudsineAI, a startup focused on protecting organisations from generative AI threats. He brings over two decades of cybersecurity expertise building scalable security systems for government, healthcare, and financial institutions. A SembCorp scholar and Honours graduate of the University of Toronto, he holds a range of credentials spanning AI and cybersecurity, including CISSP, CCIE, and certifications from SANS and DeepLearning.AI. A sought-after speaker who has presented at Black Hat Asia's inaugural AI Summit, Matthias also serves on the Industry Advisory Board at SUTD and the Examination Board at Ngee Ann Polytechnic, where he actively shapes the future of AI security across Southeast Asia and beyond.
More About the Series
AI Wednesdays is Lorong AI’s weekly gathering, bringing together practitioners, researchers and innovators for technical discussions on research insights, product development and engineering practices.
Get involved: Learn more about Lorong AI | Speaker Sign-up | WhatsApp Community | LinkedIn | X
