

Run Large Language Models Locally on NVIDIA DGX — High-Performance vLLM Workshop - Open Registration
Modern AI doesn’t have to live in the cloud. In this hands-on workshop, we’ll deploy a high-performance local LLM on NVIDIA DGX hardware using the vLLM inference engine and expose it as an OpenAI-compatible API for real applications.
You’ll see how large language models run in practice, explore performance and latency differences versus hosted services, and learn how local AI systems can power chat, coding tools, automation, research workflows, and offline environments. The focus is practical deployment and real capabilities — not theory.
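To give a flavor of the deployment pattern covered in the workshop, here is a minimal sketch of serving a model with vLLM's OpenAI-compatible server and querying it. The model name and port are illustrative assumptions, not the workshop's exact setup:

```shell
# Launch vLLM's OpenAI-compatible API server
# (model and port are illustrative; the workshop hardware and model may differ)
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

# Query it with the standard OpenAI chat-completions endpoint
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello from a local LLM!"}]
      }'
```

Because the server speaks the OpenAI API, existing OpenAI client libraries work against it by pointing their base URL at the local endpoint.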
Bring a laptop to interact with the system. No machine learning background required, but basic technical familiarity will help.
This event is hosted at the Frontier Tower:
We are transforming a 16-floor tower in San Francisco into a self-governed vertical village — a hub for frontier technologies and creative arts. Tier-one labs present work in AI, Ethereum, biotech, neuroscience, longevity, robotics, human flourishing, and arts & music. These floors will house innovators and creators pushing the boundaries of human potential in a post-AI-singularity world.
Apply here for founding citizenship: https://frontiertower.io/apply
Why should I become a citizen?
Be part of creating the first self-governed vertical village
Connect with the most creative people in the city
Get access to all floors, free event space & movement floor
Website: https://frontiertower.io/
Want to read more? Visit https://frontiertower.notion.site/