

Designing trustworthy GenAI with knowledge graphs
This session dives into the tools and techniques that make it possible to build structured analytical workflows with GenAI, using Knowledge Graphs and full auditability as core components. It’s an opportunity to unpack the systems-level design choices—like schema-first workflows, graph-grounded retrieval, and traceable reasoning—that make GenAI outputs more consistent, contextual, and inspectable.
We’ll work through how to build a Risk Foresighting and Gap Analysis tool using AI Agents.
There’s growing agreement that Knowledge Graphs meaningfully improve GenAI performance. But translating that into production systems, especially in regulated contexts, raises deeper questions about design, reliability, and trust. What are the best patterns emerging now? What’s still missing from the stack? And how do we evaluate the trade-offs involved? This session is for those looking to engage directly with these challenges and help shape how trustworthy AI gets built.
Felix Barbalet is a data expert and entrepreneur with over 15 years of experience driving IT transformation and building data-centric systems in both the public and private sectors. He is now building the Trust Layer for Enterprise AI.
AI CoLab events are intentionally open and collaborative. We capture photos, audio, and AI-generated transcripts so we can remix key insights for the AI CoLab Alliance community. This is always done in line with the Charter’s values: transparent, ethical innovation and knowledge sharing to accelerate collective learning (see join.aicolab.org). By participating in an AI CoLab event, you agree to your contributions being captured and shared in accordance with this policy.