Presented by
Lorong AI

The One About LLMs and Hallucinations II (ft. OCBC AI Lab)

Registration
Event Full
This event is full, but you can join the waitlist. You will be notified if additional spots become available.
About Event

How do we build reliable AI systems when LLMs inevitably hallucinate? Join us to explore both sides of the challenge: from treating knowledge as code with rigorous engineering practices, to understanding the theoretical limits of what AI can guarantee. Discover how to balance AI capability with reliability in real-world deployments.

More About the Sharings

  • Alejandro (Data Scientist, GovTech) will share more about “How Knowledge Management Became the Key to Powering GenAI Solutions” and introduce the “KnowledgeOps” approach, which treats knowledge with the same rigor as code, complete with version control, testing cycles, and production monitoring. Drawing from his AIBots platform experience, discover practical strategies for organising enterprise knowledge, the critical role of knowledge curators, and why treating LLMs as “smart parrots” leads to better outcomes than viewing them as authoritative sources. (Technical Level: 100)

  • Claire Gong (Senior Data Scientist, OCBC AI Lab) will give a comprehensive overview of OCBC’s enterprise chatbot/AI assistant, Buddy, and share how the team built it and uses it to boost productivity with AI at OCBC. (Technical Level: 100)

  • Ziwei Xu (Research Fellow, NUS) will share more about his paper, “Hallucination is Inevitable: An Innate Limitation of Large Language Models”. Hear why hallucination in LLMs is theoretically inevitable: not a bug to be fixed, but a fundamental limitation. Learn which problems are inherently hallucination-prone, why current mitigation strategies have theoretical limits, and what this means for safe AI deployment in critical applications. (Technical Level: 200)

More About the Speakers

  • Alejandro is a Physics PhD who pivoted into digitalization and smart manufacturing. For the last three years he has been uplifting Singapore government agencies from within as a Data Scientist in the Forward Deployed Team of the GovTech AI Practice. He now sits in the Multimodal team within the GovTech AI Practice.

  • Claire Gong is a Senior Data Scientist (Assistant Vice President) in the AI-as-a-Service team at GDO, OCBC. She focuses on applying Generative AI solutions to support the bank’s innovation efforts. Claire holds a Master’s degree in Artificial Intelligence from Nanyang Technological University and brings prior experience from technology startups. Her work spans areas such as data science, large language model fine-tuning, and practical GenAI applications.

  • Dr Ziwei Xu is a postdoctoral researcher at National University of Singapore. He received his Ph.D. in computer science from NUS in 2023 under Professor Mohan Kankanhalli. His research focuses on knowledge-enhanced machine learning—integrating symbolic methods with ML models—and AI safety and trustworthiness. His work spans sequential relation modelling, visual relation detection, video understanding, and natural language processing, with broader interests in AI's societal impact.

Location
Lorong AI (WeWork@22 Cross St.)