CuriouslyC | 3 days ago
We already have "sparse" embeddings. Google's Matryoshka embedding scheme can scale embeddings from ~150 dimensions to >3k, and it's the same embedding with layers of representational meaning. Imagine decomposing an embedding along its principal components, then streaming the component vectors in order of their eigenvalues; that's roughly the idea.
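A minimal sketch of that truncation idea (illustrative only, not Google's actual API): with a Matryoshka-trained model, the leading coordinates carry the coarsest information, so you can keep a prefix of the vector and re-normalize it for cosine similarity.

```python
import numpy as np

# Stand-in for a 3072-d Matryoshka-style embedding (random here; a real
# one would come from a trained model whose leading dims matter most).
rng = np.random.default_rng(0)
full = rng.normal(size=3072)
full /= np.linalg.norm(full)

def truncate(vec, k):
    """Keep the first k dimensions and re-normalize so cosine similarity still works."""
    v = vec[:k].copy()
    return v / np.linalg.norm(v)

# A cheap 256-d embedding carved out of the same 3072-d vector.
small = truncate(full, 256)
```

The point is that no re-embedding is needed: the 256-d version is literally a prefix of the 3072-d one.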
miven | 3 days ago
Correct me if I'm misinterpreting something in your argument, but as I see it, Matryoshka embeddings just sort the vector bases of the output space roughly by their importance for the task, PCA-style. So when you truncate your 4096-dimensional embedding down to, say, 256 dimensions, those are the exact same 256 vector bases doing the core job of encoding the important information for every sample, and you're back to dense retrieval on 256-dimensional vectors; all that's been trimmed away is the minor miscellaneous slack that's useful for a very low fraction of queries. True sparsity would imply keeping different important vector bases for different documents, but MRL doesn't magically shuffle vector bases around depending on what your document contains. Were that the case, cosine similarity between the resulting document embeddings would simply make no sense as a similarity measure.
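The distinction can be sketched in a few lines (illustrative toy code, not either method's real implementation): MRL-style truncation keeps the same leading coordinates for every document, while a truly sparse scheme would keep a different support per document.

```python
import numpy as np

rng = np.random.default_rng(1)
docs = rng.normal(size=(4, 4096))  # stand-ins for 4096-d document embeddings

# MRL-style truncation: every document keeps the SAME leading 256 coordinates,
# so retrieval is still dense, just in a shared 256-d subspace.
dense_small = docs[:, :256]

# True sparsity (toy version): each document keeps its own top-256 coordinates
# by magnitude, so different documents activate different bases.
def sparsify(vec, k=256):
    out = np.zeros_like(vec)
    idx = np.argsort(np.abs(vec))[-k:]
    out[idx] = vec[idx]
    return out

sparse = np.stack([sparsify(d) for d in docs])
# The nonzero supports generally differ between documents:
supports = [set(np.flatnonzero(s)) for s in sparse]
```

In the sparse case, comparing two documents only makes sense on the union of their supports, which is exactly why naive truncation and true sparsity are different beasts.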
jxmorris12 | 3 days ago
Matryoshka embeddings are not sparse. And SPLADE can scale to tens or hundreds of thousands of dimensions.
3abiton | 3 days ago
Doesn't PCA compress the embeddings in this case, i.e. reduce the accuracy? It's similar to quantization.
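Yes, in the sense that dropping components is lossy. A quick numpy sketch (toy data, PCA done via SVD) shows the reconstruction error growing as fewer principal components are kept:

```python
import numpy as np

rng = np.random.default_rng(2)
# Correlated toy features, mean-centered as PCA requires.
X = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64))
X -= X.mean(axis=0)

# PCA via SVD: project onto the top-k principal components and reconstruct.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

def recon_error(k):
    """Relative reconstruction error after keeping only k components."""
    Xk = (U[:, :k] * S[:k]) @ Vt[:k]
    return np.linalg.norm(X - Xk) / np.linalg.norm(X)

# Keeping fewer components discards variance, so the error grows as k shrinks.
errors = [recon_error(k) for k in (64, 32, 8)]
```

So it's lossy like quantization, but along a different axis: quantization keeps all dimensions at lower precision, while PCA/truncation keeps fewer dimensions at full precision.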