

SGLang Office Hours - Scaling LLM Serving with Ray and SGLang
SGLang Office Hours is back!
We're co-hosting with Anyscale to dig into how Ray powers large-scale LLM serving with SGLang.
Xinyu Zhang (@xinyzng), MTS at Anyscale, opens with a deep dive into the Ray executor backend inside SGLang: why Ray is needed, how it improves RL workload placement, and what you get from Ray cluster integration.
Jeffrey Wang (@jeffreyycwang), SWE at Anyscale, covers how Ray Serve fits into large-scale SGLang deployments, the roadmap, and where the community can contribute. We'll close with a live demo and walkthrough of features open for contribution.
Whether you're serving at scale or just getting started with the Ray integration, come with your questions.
LinkedIn livestream 👉 SGLang Office Hours
YouTube livestream 👉 SGLang Office Hours
Join SGLang Slack 👉 http://slack.sglang.ai/
Follow us on X 👉 https://x.com/lmsysorg
If this helps you, please consider giving us a star ⭐ it truly motivates the team. 👉 https://github.com/sgl-project/sglang