

LLM London Sessions #9
LLM London Sessions is back for 2026!
Join us at Tessl for another evening of AI conversations and demos. Whether you're building with LLMs, experimenting with prompts, or just AI-curious, pull up and hang out with other folks doing interesting things in the space. We'll have lightning talks, plenty of time to chat, and the usual good vibes.
Talk: Trustable: Building Lovable with Only Open Source
Lovable showed what delightful AI tooling looks like, but what if you need that experience without sending your code, data, and prompts to someone else's cloud? Michele built Trustable using Apache OpenServerless and OpenCode to deliver the same magic on infrastructure you fully control. In this talk he covers the real architecture decisions, the hard trade-offs between demos and production, and how to orchestrate AI agents while keeping everything private. He is bringing an NVIDIA DGX Spark to demo it live on stage.
Michele Sciabarra is the CEO and Founder of Nuvolaris Inc, a company specialised in Private AI solutions. He is an O’Reilly book author and a contributor to the Apache OpenServerless and Apache OpenWhisk open source projects.
Talk: Mixture of Models: The Next Step from Mixture of Experts
Mixture of Models (MoM) is Nordlys Labs' intelligent routing system for coding, built on the insight that no single LLM is best at everything and that different models have different blind spots. By clustering real-world software engineering problems, learning which model performs best on each cluster using SWE-Bench evaluations, and then routing each new request to the most suitable "specialist" model, the approach achieves higher accuracy than any single model, reduces cost by avoiding unnecessary calls to expensive models, and creates a future-proof foundation where new models can be seamlessly integrated and automatically leveraged as they emerge.
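The pipeline described in the abstract (cluster problems, score each model per cluster, route new requests to the cluster's best model) can be sketched roughly as follows. This is an illustrative toy, not Nordlys Labs' implementation: the model names, cluster centroids, and scores are all invented, and `route` assumes an embedding vector has already been computed for the incoming request.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Cluster centroids learned offline from embedded SWE problems (toy values).
CENTROIDS = {
    "frontend": [0.9, 0.1],
    "backend":  [0.1, 0.9],
}

# Per-cluster benchmark scores for each candidate model (toy values,
# standing in for SWE-Bench-style evaluation results).
SCORES = {
    "frontend": {"model-a": 0.62, "model-b": 0.55},
    "backend":  {"model-a": 0.48, "model-b": 0.71},
}

def route(embedding):
    """Pick the nearest cluster, then return that cluster's best model."""
    cluster = max(CENTROIDS, key=lambda c: cosine(embedding, CENTROIDS[c]))
    scores = SCORES[cluster]
    return max(scores, key=scores.get)
```

A new model is onboarded simply by evaluating it per cluster and adding its scores to the table; the router picks it up automatically wherever it wins, which is the "future-proof" property the talk describes.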
Botir Khaltaev is a research engineer at Snyk working on LLM infra and tooling, and founder at Nordlys Labs, building Mixture of Models to achieve AGI.
Talk: Safety Testing for Clinical AI
How do you prove an AI is safe when the failure modes are unpredictable and the stakes are real? As LLMs move into healthcare, traditional testing approaches fall short. You can't unit test a conversation, and you can't road-test with real patients. In this talk, James walks through MATRIX, the simulation framework Ufonia built to stress-test their clinical AI against documented hazards before it ever talks to a real patient. He covers the regulatory thinking that shaped their approach, why simulation is ideal for safety-critical AI, and how they validated both their simulated patients and their LLM-based judge against expert clinicians.
James Godwin is Chief Product Officer at Ufonia, bringing over a decade of experience in digital health and biomedical sciences. He has led teams building digital health products used by patients and clinicians across mobile, wearables, and the web in the UK, Europe, and US.
—
This event is hosted and sponsored by Tessl.