

Modular at NVIDIA GTC 2026
Writing high-performance GPU code is unreasonably hard, and it's getting worse. The bar for peak TFLOPS keeps rising, hardware keeps changing, and only a handful of engineers on the planet can write this code well.
Modular is changing that. Meet us at NVIDIA GTC 2026 and watch Mojo and MAX push NVIDIA Blackwell GPUs to their limits, with code you can actually read and maintain.
What we're showing at our booth
State-of-the-Art GPU Performance with Mojo and MAX
Live GPU programming on NVIDIA Blackwell: matmuls, generative AI model serving, and more.
"Yours is the only booth I saw actually showing us how to program GPUs." — GTC attendee, 2025
Porting a CUTLASS Blackwell Conv2D Kernel to Mojo
See how Mojo's structured kernel architecture and AI-assisted development made it possible to port a complex CUDA C++ kernel in a single session. The resulting code is cleaner and runs 6.6x faster than cuDNN on B200 GPUs.
DeepSeek V3 on B200 via Modular Cloud
DeepSeek V3 running live on NVIDIA B200 GPUs, served by the MAX stack with Mojo kernels. Real throughput and latency numbers for text and code generation workloads.
FLUX.2-dev Image Generation on B200
Submit a prompt, get a high-quality image back fast. BFL's FLUX.2-dev diffusion model running on B200 with the full MAX serving stack, optimized end-to-end for throughput, latency, and cost per image.
Why stop by?
Watch live GPU kernel code running on Blackwell hardware
See state-of-the-art inference benchmarks for LLMs and diffusion models
Talk to the engineers who built the kernels
Learn how Mojo and MAX can replace your custom CUDA stack
Explore Modular Cloud for fully managed, scalable AI endpoints
Stay in the loop
RSVP to this event to receive updates on scheduled demo times and special sessions during GTC, and maybe a chance to meet Chris Lattner. We'll send you a heads-up before our most popular demos so you don't miss them.
Find us on the expo floor
Modular | Booth #3004 | NVIDIA GTC, San Jose, CA
Interested in a deeper conversation about deploying AI at scale? Book a meeting with our team.