Cover Image for Creative AI Meetup: Mutations, Wetware and Real-Time Creativity
Presented by
Creative AI Meetup
Presenting new AI developments and creative applications to technologists, artists and researchers
Creative AI Meetup: Mutations, Wetware and Real-Time Creativity

London, England
Registration
Event Full
If you’d like, you can join the waitlist.
About Event

This event will host talks from artists and researchers presenting AI technologies and their creative applications. It is organised by curator Luba Elliott.

Event schedule:

18:00 Arrival

18:30 Introduction by event curator Luba Elliott, venue host Gabriela Tropia and sponsor Suran Goonatilake

18:35 William Latham, Artist and Professor at Goldsmiths, & Dylan Banarse, Senior Research Engineer at Google DeepMind: Evolution and Foundation. New Organic Art Mutations Driven by AI

19:00 Jenn Leung, Senior Lecturer in Creative Technology & Design at UAL: Assembloid Agency: Wetware meets Unreal Engine

19:25 Aaron Jones, Founder at VideoStack.AI: From Pixels to Possibilities: Real-Time AI Video, World Models and Creativity

19:50 Talks finish, networking

20:30 Event close

Before and after the talks we will also screen films made by CSM students and alumni as part of their first AI Film Jam.

We are grateful to our sponsors:

The Centre for Creative AI is an initiative for academia and industry to pioneer AI applications across entertainment, media, fashion, music and art.

—

More on talks and speakers:

William Latham (Goldsmiths University) and Dylan Banarse (Google DeepMind): Evolution and Foundation. New Organic Art Mutations Driven by AI.

Pioneering digital artist William Latham (Professor at Goldsmiths) and Dylan Banarse (Senior Research Engineer at Google DeepMind) will present their ongoing “Foundational Evolution” art and research project, undertaken in collaboration with mathematician Stephen Todd. Dylan Banarse coined the term “Foundational Evolution” to describe this distinctive fusion of genetic mechanisms and a 3D "form-growing" grammar with Gemini, a foundation AI model trained on vast amounts of text and images.

William Latham is a pioneering UK digital artist, well known for the evolutionary art he created at IBM in the late eighties. After twelve years as a creative director in rave music and computer games, he became a Professor at Goldsmiths. Currently, William and his collaborator Stephen Todd are working with Google DeepMind to use AI to drive and steer the evolution of their organic art. This work was shown for the first time in the Evolution and Foundational AI exhibition in London in late 2025.

Dr Dylan Banarse is a Senior Research Engineer at Google DeepMind working at the intersection of AI, artificial life and computational creativity. After his PhD on biologically inspired neural networks, he moved into industry to lead development on multiple BAFTA-nominated projects, from the Creatures series of artificial-life simulations to four series of the mixed-reality BBC TV show BAMZOOKi. He worked with William Latham on the Evolution and Foundation project.

Jenn Leung, Senior Lecturer in Creative Technology & Design at UAL: Assembloid Agency: Wetware meets Unreal Engine

What is it like to make games for things that are alive? Using Unreal Engine, I am building playable environments where living neurons (or human brain organoids), grown on microelectrode arrays, can sense, learn, and respond in real time. My work explores what happens when game engines become interfaces for synthetic bioengineered intelligence, and when neurons become players within digital worlds.

Jenn Leung is a creative technologist and simulation developer building game engine simulations and real-time streaming tools. Currently her research focuses on developing UE interfaces for living neurons and agent behaviour simulation. She is a Senior Lecturer in Creative Technology & Design at UAL, a researcher at LifeFabs Institute, and a Research Assistant at The Bartlett School of Architecture, UCL, working on the 100 Minds in Motion project combining EEG, eye-tracking, and movement data in an agent simulation.

Aaron Jones, Founder at VideoStack.AI: From Pixels to Possibilities: Real-Time AI Video, World Models and Creativity

Generative video has been constrained by slow render times that limit creative flow. This session introduces real-time autoregressive video generation using interactive world models. We’ll examine key innovations, such as 'Reward Forcing' and advanced world caching, that are propelling video synthesis beyond 23 FPS. Moving beyond pixel prediction, these systems maintain spatial and temporal coherence, allowing real-time steering. Aaron will share insights from building and scaling production-grade video systems that prioritize human-like interaction and empathy.

Aaron Jones is a serial AI founder and creative technologist, previously CEO of Yepic AI, where he built real-time, emotionally intelligent video avatars used by global brands and governments. He’s now building VideoStack.ai, starting with Story Machine — a new kind of AI creative engine designed to help humans shape ideas, narrative, and meaning, not just generate content faster.

Location
Please register to see the exact location of this event.
London, England