▲ | You Don't Need Re-Ranking: Understanding the Superlinked Vector Layer (superlinked.com)
22 points by softwaredoug 8 hours ago | 14 comments
▲ | janalsncm 7 hours ago | parent | next [-]
I don’t think the author understands the purpose of reranking. During vector retrieval, we retrieve documents in sublinear time from a vector index. This allows us to reduce the candidate set from potentially billions of documents to a much smaller number. The purpose of reranking is to let high-powered models evaluate those docs much more closely.

It is true that we can attempt to distill that reranking signal into a vector index, and most search engines already do this. But there is no replacement for using high-powered, behavior-based models in the reranking stage.
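The two-stage pipeline this comment describes can be sketched roughly as follows. This is a toy illustration, not any particular engine's implementation: stage 1 would really be a sublinear ANN index (e.g. HNSW) over billions of docs, and `expensive_score` stands in for a heavy cross-encoder or behavior-based model.

```python
import heapq
import random

random.seed(0)

# Toy corpus of embedded docs (in practice: billions, behind an ANN index).
docs = {f"doc{i}": [random.random() for _ in range(8)] for i in range(1000)}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(query_vec, k=50):
    """Stage 1: cheap vector similarity narrows the corpus to k candidates.
    (This linear scan is for illustration; a real index is sublinear.)"""
    return heapq.nlargest(k, docs, key=lambda d: dot(query_vec, docs[d]))

def rerank(query_vec, candidates, top_n=10):
    """Stage 2: an expensive model scores each (query, doc) pair closely."""
    def expensive_score(doc_id):
        # Hypothetical stand-in for a heavy model call on the full pair.
        return dot(query_vec, docs[doc_id])
    return sorted(candidates, key=expensive_score, reverse=True)[:top_n]

query = [random.random() for _ in range(8)]
candidates = retrieve(query, k=50)   # billions -> 50 (fast, approximate)
results = rerank(query, candidates)  # 50 -> 10 (slow, precise)
```

The point of the split is economic: the expensive model only ever sees the small candidate set, so its cost stays constant no matter how large the corpus grows.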
▲ | rooftopzen 6 hours ago | parent | prev | next [-]
> "When it comes to vector search, it's not just about matching words. Understanding the meaning behind them is equally important."

This statement is clearly incorrect on its premise: semantic meaning is already vectorized, and the problems with that are old news and have little to do with indexing. I went through the article though, and realized the company is probably on its last legs: an effort that was interesting 2 years ago for about a week, but funded by non-developers without any gauge of reality.
▲ | AmazingTurtle 7 hours ago | parent | prev | next [-]
At everfind.ai, we've found a middle ground that leverages both structured and unstructured data effectively in retrieval systems. We use a linear OpenSearch index for chunked information, but complement it by capturing structured metadata during ingestion, either via integrations or through schema extraction using LLMs. This structured metadata lets us take full advantage of OpenSearch's field-type capabilities.

At retrieval time, our approach starts with a broad "prefetching" step: we quickly identify the most relevant schemas, perform targeted vector searches within those schemas, and then rerank the top results using the LLM before agentic reasoning and execution. The LLM is given carefully pre-selected tools and fields, empowering it to dive deeper into prefetched results or explore alternate queries dynamically. This method significantly boosts RAG pipeline performance, ensuring both speed and relevance.

Additionally, by limiting visibility of the "agentic execution context" to just the current operation span and collapsing it in subsequent interactions, we keep context sizes manageable, further enhancing responsiveness and scalability.
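The prefetch-then-rerank flow described above might look something like the sketch below. Everything here is hypothetical (the schema table, `prefetch_schemas`, the stubbed `vector_search` and `llm_rerank`); it only illustrates the shape of the pipeline, not everfind.ai's actual API.

```python
# Illustrative schema registry: each schema has typed fields, as captured
# at ingestion time (here hardcoded; in practice extracted via LLMs).
SCHEMAS = {
    "tickets":  ["title", "status", "assignee"],
    "wiki":     ["title", "body", "author"],
    "invoices": ["vendor", "amount", "date"],
}

def prefetch_schemas(query: str, limit: int = 2) -> list[str]:
    """Step 1: cheaply pick the schemas most likely to hold the answer.
    (Naive field-name overlap here; a real system would score properly.)"""
    scores = {name: sum(field in query.lower() for field in fields)
              for name, fields in SCHEMAS.items()}
    return sorted(scores, key=scores.get, reverse=True)[:limit]

def vector_search(schema: str, query: str, k: int = 5) -> list[str]:
    """Step 2: a targeted vector search scoped to one schema (stubbed)."""
    return [f"{schema}:hit{i}" for i in range(k)]

def llm_rerank(query: str, hits: list[str], top_n: int = 3) -> list[str]:
    """Step 3: rerank the pooled candidates (stand-in for an LLM call)."""
    return sorted(hits)[:top_n]

query = "open tickets with status blocked, by assignee"
hits = [h for s in prefetch_schemas(query) for h in vector_search(s, query)]
top = llm_rerank(query, hits)
```

The design win is the same as classic two-stage search: the expensive LLM rerank only sees a small, schema-scoped candidate pool instead of the whole index.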
▲ | ccleve 6 hours ago | parent | prev | next [-]
Is there a paper or some other explanation of what they're doing under the hood?
▲ | petesergeant 8 hours ago | parent | prev [-]
> The key idea here is that with Superlinked, your search system can understand what you want and adjust accordingly.

I read as much of this article as I could be bothered to and still didn’t really understand how it removes the need for reranking. It starts talking about mixing vector and non-vector search, so OK, fine. Is there any signal here or is it all marketing fluff?