

AI Hallucination
LLMs generate factually incorrect outputs (hallucinations) that look perfectly plausible. Detecting them requires analysing semantic uncertainty at the level of meaning rather than at the level of individual tokens, a capability most observability platforms lack.
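As a small taste of what the sessions will dig into, here is a minimal Python sketch of the meaning-level idea, loosely in the spirit of semantic-entropy approaches: sample several answers to the same question, group them by meaning rather than by exact wording, and measure uncertainty over the meaning groups instead of the raw strings. The sampled answers and the word-overlap `same_meaning` check below are illustrative stand-ins; a real pipeline would draw samples from the model itself and use an entailment or embedding model for the equivalence check.

```python
# Minimal sketch: meaning-level (semantic) uncertainty vs token-level uncertainty.
# The samples and the same_meaning() heuristic are toy stand-ins, not a real detector.
import math
from collections import Counter


def same_meaning(a: str, b: str) -> bool:
    # Toy equivalence check based on word overlap; a real system would use
    # bidirectional entailment (NLI) or embedding similarity instead.
    norm = lambda s: set(s.lower().replace(".", "").replace(",", "").split())
    wa, wb = norm(a), norm(b)
    return len(wa & wb) / max(len(wa | wb), 1) > 0.5


def cluster_by_meaning(answers: list[str]) -> list[list[str]]:
    # Greedily group answers that express the same meaning.
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters


def entropy(counts: list[int]) -> float:
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts)


# Hypothetical samples for "When was the Eiffel Tower completed?"
samples = [
    "The Eiffel Tower was completed in 1889.",
    "The Eiffel Tower was finished in 1889.",
    "The Eiffel Tower was completed in 1889.",
    "Construction of the tower wrapped up around 1875.",  # hallucinated variant
    "The Eiffel Tower was completed in the year 1889.",
]

# Token-level view: every distinct string counts separately, inflating uncertainty
# even when most answers are paraphrases of the same fact.
token_level = entropy(list(Counter(samples).values()))

# Meaning-level view: paraphrases collapse into one cluster, so the remaining
# uncertainty reflects genuine disagreement about the fact (the 1875 answer).
semantic_level = entropy([len(c) for c in cluster_by_meaning(samples)])

print(f"token-level entropy:   {token_level:.3f}")
print(f"meaning-level entropy: {semantic_level:.3f}")
```

The gap between the two numbers is the point: the token-level score is dominated by surface wording, while the meaning-level score only rises when the sampled answers actually disagree, which is the signal hallucination detectors care about.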
*Kindly note: Entry for students is limited. If you are a student, you will be admitted on a first come, first served basis only. (Apologies in advance!)
10:00-10:30: Introduction
10:30-11:00: Karan Shingde, ML Engineer @AiHello
11:00-11:30: Jagminder Sehrawat, CoFounder @Famli
11:30-12:00: Shubham Mhaske, AI Engineer @Genzeon
12:00-12:30: Round-table discussion where all attendees share their knowledge, challenges, and solutions around hallucination detection
12:30 onwards: Open Networking & Refreshments