

Event: Guest Talk – Prof. Dr. Michael Hahn
🧠✨ GUEST TALK ANNOUNCEMENT
We’re excited to invite you to an upcoming AI Safety Saarland Guest Talk exploring how we can understand what’s really going on inside Large Language Models.
🎙️ Guest Talk: Understanding LLMs via Interpretability and Theory
We are delighted to host Prof. Dr. Michael Hahn, who will dive into the reasoning capabilities of large language models and the limits of current Transformer architectures.
In this talk, you’ll get insights into:
🔍 Interpretability: How we can analyze and make sense of internal model representations
🧠 Reasoning & Limits: What LLMs can (and cannot) truly reason about
⚖️ Safety Implications: Why understanding internals matters for alignment and trustworthy AI
📍 Event Details
📅 February 06, 2026
📌 Room: E1.7, SR 0.01
🍕 Free Food & Drinks (because good ideas need good fuel)
⏰ Schedule
Expert Talk by Prof. Dr. Michael Hahn
Q&A and open discussion
Networking with food & drinks
⚡ Why You Should Come
If you’re interested in AI safety, interpretability, or theory, looking for a PhD position or a HiWi job, or simply curious how LLMs actually work under the hood, this talk is for you.
AI Safety Saarland