Presented by
Lorong AI

The One About Embodied AI (Round II)

Registration
Approval Required
Your registration is subject to approval by the host.
About Event

While embodied AI shows tremendous promise in combining perception, reasoning, and action, the gap between laboratory success and real-world deployment remains significant. Explore why technical sophistication alone isn't enough for deployment success, learn how to make robots better at communicating with humans, and see autonomous systems in action.

More About the Sharings

  • Jia Yi Goh (Data Scientist, GovTech) will share "Why Embodied AI Alone Won’t Get Robots Out of the Lab". Embodied AI brings a new level of autonomy to robots by combining perception, reasoning, and motion in one loop. Yet in practice, successful deployment still depends on engineering fundamentals such as hardware reliability, sensor stability, and navigation accuracy. Jia Yi will look beyond the hype to examine where embodied AI adds value, where it falls short, and why its success alone does not translate into dependable field deployment – illustrated with a case study on deploying computer vision-powered robots in real-world settings. (Technical Level: 100)

  • Dr Jianfei Yang (Assistant Professor, NTU) will present recent advances toward making LLM-powered robots more robust and human-aware. Through studies on ambiguous commands (REI-Bench) and noisy language (NoisyEQA), he will explore methods for intent clarification, noise detection, and self-correction to create intelligent robotic systems that adapt to imperfect communication, improving safety and accessibility in applications such as childcare and elderly care. (Technical Level: 200)

  • Vincent (Software Engineer, KLASS) will share insights on the use of micro Unmanned Aerial Vehicles (UAVs) for surveying indoor environments under Beyond Visual Line of Sight (BVLOS) operations. His presentation will cover how these UAVs can generate detailed 3D maps of indoor spaces in real time, with live visualisations. He will also demonstrate how, upon completing a mission, the UAV can autonomously navigate back to its take-off point while intelligently avoiding obstacles. (Technical Level: 300)

More About the Speakers

  • Jia Yi Goh is a Data Scientist at GovTech Singapore, where she leads safety testing initiatives within the Responsible AI team. Her work focuses on making AI systems more trustworthy and dependable for public deployment. Her interest in technology began at a young age and evolved into a passion for AI during her undergraduate studies. She began her career on GovTech’s Video Analytics team, developing and deploying real-world computer vision applications. Today, she is driven by the challenge of staying at the forefront of AI while ensuring its safe and responsible use in the real world.

  • Dr Jianfei Yang is an Assistant Professor at NTU and Director of the Multimodal AI and Robotic Systems (MARS) Lab. Previously a researcher at UC Berkeley and Harvard, he focuses on multimodal perception and reasoning for human-centric embodied AI, empowering robots to perceive, understand, and interact with the physical world. He has been recognised as a Forbes 30 Under 30 honouree (2024) and a Stanford Top 2% Scientist, and has won first place in over ten international AI competitions.

  • Vincent is a Robotics Software Engineer at KLASS with a Bachelor’s in Computer Engineering from NUS. With over four years of experience spanning industrial and aerial robotics, he focuses on developing and deploying autonomous systems using C, C++, and Python. Driven by a strong passion for autonomous navigation, he is dedicated to creating practical and intelligent robotics solutions.

Psst... Interested in becoming a speaker for our sessions? Sign up here!

Location
Lorong AI (WeWork@22 Cross St.)