Cover Image for TurboQuant in Qdrant Explained
Presented by
Qdrant
TurboQuant in Qdrant Explained

Zoom
About Event

TurboQuant is now in Qdrant. In this live technical session, we'll show you what it is, how it works, and why to migrate from Scalar Quantization (SQ) or Binary Quantization (BQ), a switch you can make with a single config change.

What We'll Discuss

The quantization landscape has a new option. TurboQuant adds a new path alongside Scalar Quantization (SQ) and Binary Quantization (BQ), with operating points at 8x, 16x, and 32x compression. It consistently provides higher recall than BQ at every storage class.
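To put those operating points in concrete terms, here is a quick back-of-the-envelope calculation. The 1536-dimension float32 vector is an illustrative assumption (matching OpenAI's text-embedding-3-small, one of the benchmark datasets); only the 8x/16x/32x ratios come from the session description.

```python
# Illustrative per-vector memory at TurboQuant's stated compression ratios.
# The 1536-dim float32 vector is an assumption for illustration.
DIMS = 1536
FLOAT32_BYTES = 4

raw = DIMS * FLOAT32_BYTES  # 6144 bytes per uncompressed vector

for ratio in (8, 16, 32):
    # Integer division: compressed size at each operating point.
    print(f"{ratio}x compression: {raw // ratio} bytes per vector")
```

At 32x, a billion such vectors drop from roughly 6 TB of raw float32 storage to under 200 GB of quantized data.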

In this webinar, you'll learn about and discuss:

  • What TurboQuant is and how it differs from SQ and BQ

  • What Qdrant adds on top of the original Google Research algorithm: length re-normalization, per-coordinate anisotropy compensation, L2/dot support, and the integer-arithmetic scoring path

  • How it benchmarks across four public datasets: arxiv-instructorxl, dbpedia-openai3-large, dbpedia-openai3-small, and wiki-cohere-v3

  • When to use it and when to stay on SQ or BQ

  • How to enable it with a config change on a new or existing collection
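The session will cover the exact config change; the sketch below is only a hypothetical illustration of what it might look like, modeled on how Qdrant's existing `quantization_config` is shaped in the REST API. The `turbo` key and its fields are assumptions, not the documented API (Qdrant's documented quantization variants today are `scalar`, `product`, and `binary`).

```python
# Hypothetical create-collection body enabling TurboQuant.
# NOTE: the "turbo" key and its fields are assumed for illustration only;
# consult Qdrant's docs for the actual TurboQuant configuration.
import json

create_collection_body = {
    "vectors": {"size": 1536, "distance": "Dot"},
    "quantization_config": {
        "turbo": {                 # hypothetical key for TurboQuant
            "compression": "x16",  # assumed: one of the 8x/16x/32x operating points
            "always_ram": True,    # keep quantized vectors in RAM for fast scoring
        }
    },
}
print(json.dumps(create_collection_body, indent=2))
```

For an existing collection, the same `quantization_config` block would go in an update-collection request rather than at creation time.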

Who Should Attend

This session is built for engineers running vector workloads in production. It'll be most useful if you're currently using SQ or BQ, evaluating memory reduction strategies, or want to understand how quantization affects recall on real embedding models.
