

Inworld AI x GMI Cloud: Get the Most Out of Your Voice AI
Join us for a live workshop on how to get the most out of Inworld’s voice AI platform.
We’ll cover practical tips for building more natural voice experiences, improving latency and quality, and making the most of features like text-to-speech, speech-to-speech, and LLM routing.
This session is livestreamed—bring your use cases or anything you’re stuck on, and drop your questions or comments in our community.
Whether you're prototyping or already in production, you’ll leave with ideas you can apply immediately.
9:00 Cale demos TTS 2.0
9:10 Roan demos using Inworld's API key
9:20 Open discussion and Q&A
Inworld exists to empower developers to confidently architect and deploy realtime AI applications at massive scale. We have the #1-ranked voice AI model, with human-like expression and sub-200ms latency that feels like a real conversation, at a fraction of the cost of other providers.
Cale Shapera is a Senior Staff Engineer at Inworld who helps customers adopt and build AI experiences in their products and platforms.
GMI Cloud is a GPU-powered AI infrastructure platform and one of NVIDIA's seven reference cloud partners. We give teams the compute, tooling, and infrastructure to build, deploy, and scale AI applications that actually run in production.
Roan Weigert leads developer relations at GMI Cloud, where he helps creators and AI builders bring their ideas to life on GPU-powered infrastructure. In this session, he'll walk through how to integrate Inworld's voice AI APIs using GMI Cloud's inference platform, from a working API key to a running demo in under 10 minutes.
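To give a feel for what the API-key walkthrough covers, here is a minimal sketch of a text-to-speech request. Note that the endpoint URL, header scheme, and payload field names below are illustrative placeholders, not Inworld's documented API; the session will show the real endpoints and parameters.

```python
# Hedged sketch of a TTS synthesis request. The URL path, auth scheme,
# and JSON fields are placeholders for illustration only.
import json
import os


def build_tts_request(text, voice="example-voice", api_key=None):
    """Assemble the URL, headers, and JSON body for a hypothetical
    text-to-speech call authenticated with an API key."""
    api_key = api_key or os.environ.get("INWORLD_API_KEY", "")
    url = "https://api.example.com/tts/v1/synthesize"  # placeholder path
    headers = {
        "Authorization": f"Bearer {api_key}",  # auth scheme is assumed
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text, "voiceId": voice})
    return url, headers, body


if __name__ == "__main__":
    url, headers, body = build_tts_request("Hello from the workshop!")
    print(url)
    print(json.loads(body)["text"])
```

In practice you would send this request with an HTTP client and write the returned audio bytes to a file; the live demo covers that end-to-end flow, including where to get your API key.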