| ▲ | ANN v3: 200ms p99 query latency over 100B vectors (turbopuffer.com) |
| 77 points by _peregrine_ 4 days ago | 27 comments |
| |
|
| ▲ | jascha_eng 4 days ago | parent | next [-] |
| This is legitimately pretty impressive. I think the rule of thumb now is: go with Postgres (pgvector) for vector search until it breaks, then go with turbopuffer. |
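A minimal sketch of that starting point, assuming Postgres with the pgvector extension and the psycopg driver; the table, dimension, and index choice are illustrative, not from the thread:

```python
# Sketch of "pgvector until it breaks": one table, one HNSW index,
# nearest-neighbor queries in plain SQL. Names/dims are illustrative.
import psycopg

DIM = 384  # e.g. a small sentence-embedding model

with psycopg.connect("dbname=app") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS docs (id bigserial PRIMARY KEY, "
        f"body text, embedding vector({DIM}))"
    )
    # HNSW index (requires pgvector >= 0.5); cosine-distance operator class.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS docs_embedding_idx "
        "ON docs USING hnsw (embedding vector_cosine_ops)"
    )
    query_vec = [0.0] * DIM  # placeholder embedding
    vec_literal = "[" + ",".join(str(v) for v in query_vec) + "]"
    rows = conn.execute(
        "SELECT id, body FROM docs ORDER BY embedding <=> %s::vector LIMIT 10",
        (vec_literal,),
    ).fetchall()
```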
| |
| ▲ | sa-code 4 hours ago | parent | next [-] | | Qdrant is also a good default choice, since it can run in-memory for development, on local disk for small deployments, and scaled out for "web scale" workloads. As a principal eng, side-stepping a migration and having a good local dev experience is too good of a deal to pass up. That being said, turbopuffer looks interesting. I will check it out. Hopefully their local dev experience is good. | | |
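A minimal sketch of that in-memory dev mode, assuming the qdrant-client package; collection name and vectors are illustrative, and the same code points at a real server by swapping the constructor argument for a URL:

```python
# In-memory Qdrant for local dev/tests: no server process needed.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # swap for QdrantClient(url=...) in prod
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"t": "hi"})],
)
hits = client.search(collection_name="docs", query_vector=[0.1, 0.2, 0.3, 0.4], limit=3)
```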
| ▲ | benesch 3 hours ago | parent | next [-] | | For local dev + testing, we recommend just hitting the production turbopuffer service directly, but with a separate test org/API key: https://turbopuffer.com/docs/testing Works well for the vast majority of our customers (although we get the very occasional complaint about wanting a dev environment that works offline). The dataset sizes for local dev are usually so small that the cost rounds to free. | | |
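A hedged sketch of that testing pattern (separate test API key, throwaway namespace per run); the tpuf_* helpers here are hypothetical stand-ins, not turbopuffer's actual client API, so see the linked docs for the real calls:

```python
# Testing against the hosted service with a separate test org/API key.
# TPUF_TEST_API_KEY and the tpuf_* helpers are placeholders, NOT the
# real turbopuffer client; see https://turbopuffer.com/docs/testing
import os
import uuid

API_KEY = os.environ["TPUF_TEST_API_KEY"]  # test-org key, never the prod key
ns = f"ci-{uuid.uuid4().hex[:8]}"          # throwaway namespace per test run

def test_roundtrip():
    tpuf_upsert(ns, API_KEY, ids=[1], vectors=[[0.1, 0.2]])      # hypothetical helper
    hits = tpuf_query(ns, API_KEY, vector=[0.1, 0.2], top_k=1)   # hypothetical helper
    assert hits[0].id == 1
    tpuf_delete_namespace(ns, API_KEY)  # clean up so the cost rounds to free
```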
| ▲ | lambda 8 minutes ago | parent | next [-] | | > although we get the very occasional complaint about wanting a dev environment that works offline It's only occasional because the people who care about dev environments that work offline are most likely to just skip you and move on. For actual developer experience, as well as a number of use cases like customers with security and privacy concerns, being able to host locally is essential. Fair enough if you don't care about those segments of the market, but don't confuse a small number of people asking about it with a small number of people wanting it. | |
| ▲ | enigmo an hour ago | parent | prev | next [-] | | having a local simulator (DynamoDB, Spanner, others) helps me a lot for offline/local development and CI. when a vendor doesn't offer this, I often end up mocking it out (one way or another) and have to wait for integration or e2e tests for feedback that could have been shifted further left. in many CI environments unit tests don't have network access, so it's not purely a price consideration. (not a turbopuffer customer but I have been looking at it) | | |
| ▲ | benesch 12 minutes ago | parent [-] | | > in many CI environments unit tests don't have network access, it's not purely a price consideration. I've never seen a hard block on network access (how do you install packages/pull images?) but I am sympathetic to wanting to enforce that unit tests run quickly by minimizing/eliminating RTT to networked services. We've considered the possibility of a local simulator before. Let me know if it winds up being a blocker for your use case. | | |
| ▲ | lambda 7 minutes ago | parent [-] | | > how do you install packages/pull images You pre-build the images with packages installed beforehand, then use those images offline. |
|
| |
| ▲ | sroussey 3 hours ago | parent | prev [-] | | That’s not local though |
| |
| ▲ | nostrebored 3 hours ago | parent | prev [-] | | Qdrant is one of the few vendors I actively steer people away from. Look at the GitHub issues, look at what their CEO says, look at their fake “advancements” that they pay for publicity on… The number of people I know who’ve had unrecoverable shard failures on Qdrant is too high to take it seriously. |
| |
| ▲ | _peregrine_ 3 days ago | parent | prev | next [-] | | seems like a good rule of thumb to me! though i would perhaps lump "cost" into the "until it breaks" equation. even with decent perf, pgvector's economics can be much worse, especially in multi-tenant scenarios where you need many small indexes (this is true of any vector db that builds indexes primarily in RAM / on SSD) | |
| ▲ | jauntywundrkind an hour ago | parent | prev [-] | | I'd love to know how this compares with MixedBread, and what relative strengths each has. https://www.mixedbread.com/ I really really enjoy & learn a lot from the mixedbread blog. And they find good stuff to open source (although the product itself is closed). https://www.mixedbread.com/blog I feel like there's a lot of overlap but also probably a lot of distinction too. Pretty new to this space of products though. |
|
|
| ▲ | mmaunder 5 hours ago | parent | prev | next [-] |
| Those of us who operate on-site have to add back network latency, which negates this win entirely and makes a proprietary cloud solution like this a nonstarter. |
| |
| ▲ | benesch 3 hours ago | parent [-] | | Often not a dealbreaker, actually! We can spin up new tpuf regions and procure dedicated interconnects to minimize latency to the on-prem network on request (and we have done this). When you're operating at the 100B scale, you're pushing beyond the capacity that most on-prem setups can handle. Most orgs have no choice but to put a 100B workload into the nearest public cloud. (For smaller workloads, considerations are different, for sure.) |
|
|
| ▲ | kgeist 5 hours ago | parent | prev | next [-] |
| Are there vector DBs with 100B vectors in production that work well? There was a paper showing a 12% loss in accuracy at just 1M vectors. Maybe some kind of logical sharding is another option, to improve both accuracy and speed. |
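A sketch of the logical-sharding idea, under the assumption that each shard is asked for the global top_k and results are merged by score; brute-force scoring stands in here for per-shard ANN:

```python
# Scatter-gather over logical shards: every shard returns the global
# top_k (not k / num_shards), then a merge keeps the k best overall.
import heapq
import numpy as np

def search_shard(vecs, ids, q, k):
    scores = vecs @ q  # similarity; assumes unit-norm vectors
    top = np.argpartition(-scores, k)[:k]
    return [(float(scores[i]), int(ids[i])) for i in top]

def sharded_search(shards, q, k=10):
    # shards: iterable of (vectors, ids) pairs, e.g. one pair per tenant
    candidates = []
    for vecs, ids in shards:
        candidates.extend(search_shard(vecs, ids, q, k))
    return heapq.nlargest(k, candidates)  # (score, id), best first
```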
| |
| ▲ | lmeyerov 3 hours ago | parent | next [-] | | I don't know at these scales, but at 1M-100M, we found switching from out-of-the-box embeddings to fine-tuned embeddings took much of the sting out of the compression/recall trade-off. We had a 10-100X win here wrt comparable recall with better compression. I'm not sure how that'd work with the binary quantization phase though. For example, we use Matryoshka embeddings, and some of the bits matter way more than others, so that might be super painful. | |
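A sketch of the Matryoshka property being described, assuming MRL-style embeddings where the prefix dimensions carry most of the ranking signal; the data here is synthetic (so the measured overlap only illustrates the computation) and the 512/64 split is an assumption:

```python
# Matryoshka-style truncation: keep the first PREFIX dims and renormalize,
# then compare the truncated ranking against the full-dimension ranking.
import numpy as np

rng = np.random.default_rng(0)
FULL, PREFIX = 512, 64
docs = rng.standard_normal((10_000, FULL)).astype(np.float32)
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

def truncate(x, d):
    t = x[..., :d]
    return t / np.linalg.norm(t, axis=-1, keepdims=True)

q = docs[0] + 0.1 * rng.standard_normal(FULL).astype(np.float32)
full_rank = np.argsort(-(docs @ q))[:10]
trunc_rank = np.argsort(-(truncate(docs, PREFIX) @ truncate(q, PREFIX)))[:10]
overlap = len(set(full_rank) & set(trunc_rank)) / 10  # recall@10 of the prefix
```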
| ▲ | jasonjmcghee 3 hours ago | parent | prev | next [-] | | So many missing details... Different vector indexes have very different recall, and even the different parameters for each dramatically impact it. HNSW can have very good recall even at high vector counts. There's also the embedding model, whether you're quantizing, whether it's pure RAG vs hybrid BM25 / static word embeddings vs graph connections, whether you're reranking, etc. | |
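A sketch of one such parameter, using the hnswlib package: raising ef (the search beam width) trades query latency for recall; dataset size and parameter values are illustrative:

```python
# HNSW recall depends heavily on parameters: ef_construction/M at build
# time, ef at query time. Higher ef -> better recall, slower queries.
import hnswlib
import numpy as np

dim, n = 128, 50_000
rng = np.random.default_rng(0)
data = rng.standard_normal((n, dim)).astype(np.float32)

index = hnswlib.Index(space="l2", dim=dim)
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(data, np.arange(n))

q = data[:100]
# Exact ground truth via brute force (||q||^2 term is constant per row).
d2 = (data ** 2).sum(1)[None, :] - 2.0 * (q @ data.T)
exact = np.argsort(d2, axis=1)[:, :10]
for ef in (10, 50, 200):
    index.set_ef(ef)
    labels, _ = index.knn_query(q, k=10)
    recall = np.mean([len(set(l) & set(e)) / 10 for l, e in zip(labels, exact)])
    print(f"ef={ef}: recall@10={recall:.2f}")
```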
| ▲ | _peregrine_ 5 hours ago | parent | prev [-] | | the solution described in the blog post is currently in production at 100B vectors | | |
|
|
| ▲ | alanwli an hour ago | parent | prev | next [-] |
| Out of curiosity, how is the 92% recall calculated? For a given query, is recall measured against the true top-k of all 100B vectors, or is recall at each of N shards measured against the top-k of that respective shard? |
| |
| ▲ | nvanbenschoten an hour ago | parent [-] | | (author here) The 92% mentioned in this post is showing recall@10 across all 100B vectors, calculated by comparing to the global top_k. turbopuffer will also continuously monitor production recall at the per-shard level (or on-demand with https://turbopuffer.com/docs/recall). Perhaps counterintuitively, the global recall will actually be better than the per-shard recall if each shard is asked for its own, local top_k! |
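A sketch of that recall@10 calculation: compare the IDs an ANN query returns against the exact, brute-force global top 10 (the data and the simulated miss below are illustrative):

```python
# recall@k = |ANN top-k  ∩  exact top-k| / k, against the GLOBAL top_k.
import numpy as np

def recall_at_k(ann_ids, exact_ids, k=10):
    return len(set(ann_ids[:k]) & set(exact_ids[:k])) / k

rng = np.random.default_rng(0)
docs = rng.standard_normal((100_000, 64)).astype(np.float32)
q = rng.standard_normal(64).astype(np.float32)

exact = np.argsort(-(docs @ q))[:10]  # ground-truth global top 10
ann = exact.copy()
ann[9] = 12_345                       # simulate the ANN missing one result
print(recall_at_k(ann, exact))        # -> 0.9
```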
|
|
| ▲ | lmeyerov 6 hours ago | parent | prev | next [-] |
| Fun! I was curious given the cloud discussion: a quick search suggests default AWS SSD bandwidth is 250 MB/s, and you can pay more for 1 GB/s. Similarly for S3, one HTTP connection is < 100 MB/s, and you can pay for more parallel connections. So the hot binary-quantized search index is doing a lot of work to minimize these reads, both for the initial hot queries and for pruning later fetches. Very cool! |
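Back-of-the-envelope on those numbers, assuming 768-dim fp32 vectors (the dimension is an assumption, not from the post):

```python
# Why a compact in-memory index matters at 100B scale: the raw corpus is
# far too big to stream, but a small re-rank fetch is cheap.
N, DIM = 100e9, 768
float32_bytes = N * DIM * 4   # ~307 TB of raw fp32 vectors
binary_bytes = N * DIM / 8    # ~9.6 TB binary-quantized (1 bit/dim)
ssd_bw = 250e6                # 250 MB/s default SSD-class bandwidth
print(f"fp32 corpus: {float32_bytes/1e12:.0f} TB, binary: {binary_bytes/1e12:.1f} TB")
# Re-ranking 1k candidates from SSD: 1k * 768 * 4 B ≈ 3 MB, ~12 ms at
# 250 MB/s; the index's job is keeping that candidate set tiny.
print(f"1k fp32 candidates: {1000*DIM*4/1e6:.1f} MB, "
      f"{1000*DIM*4/ssd_bw*1e3:.0f} ms at 250 MB/s")
```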
|
| ▲ | montroser 3 hours ago | parent | prev | next [-] |
| This is at 92% recall. Could be worse, but could definitely be much better. Quantization and hierarchical clustering are tricks that lead to awesome performance at the cost of extremely variable quality, depending on the dataset. |
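A sketch of the quantization trade-off being described: sign-bit codes compared by Hamming distance, with quality that depends heavily on how the data is distributed (synthetic data, so the numbers are illustrative only):

```python
# Binary quantization: 1 bit per dimension, Hamming distance as a cheap
# proxy for cosine similarity. Recall of the raw codes is data-dependent.
import numpy as np

rng = np.random.default_rng(0)
docs = rng.standard_normal((50_000, 256)).astype(np.float32)
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
q = docs[42] + 0.05 * rng.standard_normal(256).astype(np.float32)

codes = np.packbits(docs > 0, axis=1)  # 256 dims -> 32 bytes per vector
qcode = np.packbits(q > 0)
hamming = np.unpackbits(codes ^ qcode, axis=1).sum(1)

exact = set(np.argsort(-(docs @ q))[:10])
approx = set(np.argsort(hamming)[:10])
print("recall@10 of raw binary codes:", len(exact & approx) / 10)
# Production systems typically re-rank a larger binary candidate set with
# full-precision vectors, which recovers most of the lost recall.
```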
|
| ▲ | shayonj 6 hours ago | parent | prev | next [-] |
| v cool and impressive! |
|
| ▲ | redskyluan 3 hours ago | parent | prev [-] |
| Using Hierarchical Clustering significantly reduces recall; this is a solution we used and abandoned three years ago. |