PyAI @ AIE London Meetup
Presented by
PyAI
131 Going
Registration
Registration Closed
This event is not currently taking registrations.
About Event

Join us for an evening of AI discussions featuring talks from:

  • Marlene Mhangami: Senior Developer Advocate specialising in Python & AI @ Microsoft

  • Samuel Colvin: creator & CEO @ Pydantic

  • Shifra Williams: DevRel @ Render

  • Pablo Galindo Salgado: Python core developer & theoretical physicist (black-hole specialist) @ HRT

  • David Hewitt: Creator of PyO3 and OSS contributor

We aim to start the talks at 6pm. See you there!


Marlene Mhangami

A Practical Guide to Agentic Coding

AI agents have become increasingly good at generating code, and developers who know how to use agentic tools as they program can significantly increase their productivity. In this session, Marlene Mhangami shares how she gets the most out of agents in her development workflow using Pydantic AI and GitHub Copilot in the CLI and VS Code. She'll walk through how she uses MCP (Model Context Protocol), Agent Skills, and Instructions to create semi-autonomous agents that complete multi-step tasks end-to-end. We'll work through practical Python code that leverages MCP, custom agents, and skills to:

  • Build custom agents in code and with an agents.md file

  • Work with the M365 suite to retrieve Outlook emails

  • Validate code changes with the Playwright MCP server

This talk is a hands-on case study of agentic coding in action — showing the core mental model (planning + tool use + validation), effective patterns with today's SDKs, common pitfalls, and how these techniques apply to dev workflows.
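As a taste of the agents.md approach mentioned above, here is a hypothetical minimal file of the kind such tools read. The agent name, tool names, and section layout are illustrative assumptions, not material from the talk; exact conventions vary by tool, so treat this as a sketch rather than a canonical format:

```markdown
# Agent: mail-triage (hypothetical example)

## Instructions
You triage incoming Outlook email. Summarise each thread in one sentence
and label it `action-needed`, `fyi`, or `ignore`.

## Tools
- M365 MCP server (read-only mail access)
- Playwright MCP server (to validate links before labelling)
```

The idea is that instructions and tool grants live in a plain file checked into the repo, so the agent's behaviour is versioned alongside the code it works on.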


Samuel Colvin

Controlling the wild: from tool calling to computer use

There's a continuum from traditional tool calling through to full computer use, with interesting options at every point along it. This talk is about one particular answer: Monty, a sandboxed Python interpreter built for AI agents.

Come watch my code fail in microseconds.


Shifra Williams

What your AI pipeline does when you're not looking

AI pipelines are only as good as your ability to see inside them. LLM calls are slow and surprisingly opaque, and without the right observability, debugging quality issues or runaway costs is pure guesswork.

In this talk, Shifra demos a RAG pipeline built with Pydantic AI, deployed on Render, and instrumented end-to-end with Logfire. The pipeline runs questions through eight stages, from embedding and hybrid retrieval to dual-model evaluation and a quality gate that automatically iterates until the answer is good enough. Every token spent, every millisecond, every evaluator disagreement is captured and queryable.
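The quality gate described above can be sketched as a small loop: generate an answer, score it with two independent evaluators, and retry until both scores clear a threshold. This is a minimal stdlib sketch of the pattern only; the generator and the two judges are deterministic stubs (all names hypothetical), where a real pipeline would call the underlying models:

```python
from dataclasses import dataclass

THRESHOLD = 0.8      # minimum score both judges must reach
MAX_ATTEMPTS = 3     # give up after this many revisions

@dataclass
class GateResult:
    answer: str
    scores: tuple[float, float]
    attempts: int
    passed: bool

def generate_answer(question: str, attempt: int) -> str:
    # Stub generator; a real pipeline would call an LLM here.
    return f"draft {attempt} for {question!r}"

def evaluate(answer: str, attempt: int) -> tuple[float, float]:
    # Stubs for two independent judge models; here scores simply
    # improve with each revision so the loop terminates.
    base = 0.5 + 0.15 * attempt
    return (min(base, 1.0), min(base + 0.05, 1.0))

def quality_gate(question: str) -> GateResult:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        answer = generate_answer(question, attempt)
        scores = evaluate(answer, attempt)
        # Both judges must agree the answer is good enough.
        if min(scores) >= THRESHOLD:
            return GateResult(answer, scores, attempt, True)
    return GateResult(answer, scores, attempt, False)

result = quality_gate("What is RAG?")
```

In the talk's setup each attempt, score, and disagreement between the two judges would additionally be emitted as telemetry, which is what makes the loop debuggable after the fact.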

We'll walk through how Pydantic AI's structured outputs pair with Logfire's auto-instrumentation to give you full visibility into distributed traces, dual-model scoring with OpenAI and Anthropic, self-correcting loops, and SQL-queryable telemetry for cost analysis and proactive alerting.

This is a practical case study in what observable AI actually means in Python: instrumentation that tells you why your pipeline is slow, where it's spending money, and when it's quietly producing bad answers. The full stack runs on Render, so you'll also see what it looks like to actually ship an AI app in production.

Come watch an LLM evaluate itself in real time.


Pablo Galindo Salgado


David Hewitt

Thoughts on LLM Contributions to OSS (lightning talk)

Location
Atomico
29 Rathbone St, London W1T 1NJ, UK