▲ | Show HN: Model2vec – Lightning-fast Static Embeddings for RAG/Semantic Search (github.com)
28 points by Pringled 8 days ago | 4 comments
We’ve recently open-sourced Model2vec, a method to distill sentence transformers into static embeddings that outperform all previous static approaches by a large margin on MTEB. Our new models set a new state of the art for static embeddings.

Main features:

- Our best model (potion-base-8M) has only 8M parameters, which is ~30 MB on disk
- Inference is ~500x faster on a CPU than the distilled base model (bge-base)
- New models can be distilled in 30 seconds on a CPU without requiring a dataset - just a vocabulary
- NumPy-only inference: the package can be installed with minimal dependencies for lightweight deployments
- The library is integrated with Sentence Transformers, making it easy to use with other popular libraries

We built this because we think static embeddings can provide a hardware-friendly alternative to many of the larger embedding models out there, while still being performant enough to power use cases such as RAG or semantic search. We’re curious to hear your feedback, and whether there are any use cases you can think of that we haven’t explored yet!

Link to the code and results: https://github.com/MinishLab/model2vec
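To make the "static embeddings" idea concrete: at inference time there is no neural network, just a precomputed per-token vector table and a mean pool. The sketch below is a toy illustration of that pattern, not the actual Model2vec code; the vocabulary, dimensions, and `encode` helper are all made up for the example (the real library ships its own models and API).

```python
import numpy as np

# Hypothetical toy setup: a tiny vocabulary and a random embedding table
# standing in for the distilled per-token vectors.
rng = np.random.default_rng(0)
vocab = {"rag": 0, "semantic": 1, "search": 2, "fast": 3}
dim = 4
embedding_table = rng.standard_normal((len(vocab), dim)).astype(np.float32)

def encode(sentence: str) -> np.ndarray:
    """Static-embedding inference: look up known tokens, mean-pool their vectors."""
    ids = [vocab[t] for t in sentence.lower().split() if t in vocab]
    if not ids:
        return np.zeros(dim, dtype=np.float32)
    return embedding_table[ids].mean(axis=0)

vec = encode("fast semantic search")
print(vec.shape)  # (4,)
```

Because encoding is a table lookup plus a mean, the cost is dominated by tokenization and memory bandwidth rather than matrix multiplies, which is where the ~500x CPU speedup over a transformer encoder comes from.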
▲ | bturtel 6 days ago | parent | next [-]
This seems awesome for enabling RAG queries for on-device LLMs.
▲ | jerpint 7 days ago | parent | prev | next [-]
I wonder at what point it will be about as much overhead to pass a subset of the data through a small yet capable and fast LLM vs. using a crude dot product when doing retrieval.
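For reference, the "crude dot product" baseline the comment alludes to is very cheap: with L2-normalized embeddings, one matrix-vector product scores an entire corpus by cosine similarity. A minimal sketch with synthetic data (the corpus size and dimension are arbitrary choices for illustration):

```python
import numpy as np

# Synthetic corpus of 10k 256-d embeddings, L2-normalized so that a dot
# product equals cosine similarity.
rng = np.random.default_rng(1)
corpus = rng.standard_normal((10_000, 256)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

query = rng.standard_normal(256).astype(np.float32)
query /= np.linalg.norm(query)

scores = corpus @ query                # cosine similarity per document
top_k = np.argsort(scores)[::-1][:5]   # indices of the 5 best matches
print(top_k.shape)  # (5,)
```

This whole pass is a single BLAS call, so a small-LLM reranker would have to be extremely fast before it competes on raw overhead; the more common compromise is dot-product retrieval first, LLM only on the top-k.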
▲ | protoshell248 7 days ago | parent | prev [-]
10K embeddings generated in under 700 milliseconds!!!