janalsncm a day ago:
Using an LLM isn’t the worst way to rank, but it’s pretty darn slow. The speed could be improved a lot by distilling into deep neural nets, though. The results for me were fairly high quality and moderately relevant, but I think they could be improved as well. You get pretty far by just blocking low-quality blogspam and Medium, which would be a lot faster and could even be done on the frontend with a Chrome plugin.
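The blocklist idea is cheap enough to run anywhere. A minimal sketch of what that filtering step might look like, assuming results arrive as plain URLs (the blocklist contents and result format here are illustrative, not anyone's actual implementation):

```python
# Sketch of frontend-style domain filtering: drop results from known
# blogspam hosts before any expensive LLM reranking runs.
from urllib.parse import urlparse

BLOCKLIST = {"medium.com"}  # extend with known low-quality domains

def filter_results(urls):
    """Keep only URLs whose hostname is not on the blocklist."""
    def domain(url):
        host = urlparse(url).hostname or ""
        return host.removeprefix("www.")  # treat www.foo.com as foo.com
    return [u for u in urls if domain(u) not in BLOCKLIST]
```

Because this is a pure string check per result, it runs in microseconds, versus an LLM call per document.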
mfkhalil a day ago:
Yeah, LLMs were the easiest way to get a proof of concept running, but replacing them with a specialized distilled model/classifier should make things much quicker. As for the results, it's tough because we've made the deliberate decision to have no control over the reranking. What that means is that if your criterion is "written by a woman", for instance, then any result that meets it will be ranked equally at the top. In all the engines I've built for myself, I include a relevance criterion weighted relative to how much I care that the result is exactly what I'm looking for. It's probably important to make that clearer to the end user.
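The tie problem described above (every pass/fail match scores the same) and the weighted-relevance fix might look something like this sketch. The `Result` shape, criterion names, and scoring scheme are assumptions for illustration, not the project's actual code:

```python
# Sketch: combine binary criterion matches with a weighted relevance
# term so that equally-matching results still get a useful ordering.
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    criterion_scores: dict  # e.g. {"written_by_a_woman": 1.0}
    relevance: float        # 0..1, how well the page matches the query

def rank_results(results, weights):
    """Sort by a weighted sum of criterion scores plus relevance.

    With only hard criteria, all matches score identically; giving
    "relevance" a nonzero weight breaks those ties.
    """
    def score(r):
        s = sum(w * r.criterion_scores.get(name, 0.0)
                for name, w in weights.items() if name != "relevance")
        return s + weights.get("relevance", 0.0) * r.relevance
    return sorted(results, key=score, reverse=True)
```

Exposing the "relevance" weight as a user-facing knob would make the trade-off the comment describes explicit.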