datadrivenangel | 3 days ago
Paper and repo do not mention routing latency, which I think is a concern. Also the paper has some pie chart crimes on page 6.
NitpickLawyer | 3 days ago
Just from a brief look at the repo, they seem to be doing semantic embeddings with Qwen3-Embedding-8B, which should manage prompt processing in the high thousands of tokens/sec on recent hardware. With a sufficiently large dataset collected after using it for a while, you could probably fine-tune a smaller model as well (4B and 0.6B variants are available in the same family).
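
For context, routing on semantic embeddings typically means embedding the incoming query once and picking the nearest route by cosine similarity, so the added latency is roughly one embedding forward pass plus a dot product. Below is a minimal sketch of that pattern, assuming the model is used through sentence-transformers as shown on its model card; the route labels and exemplar descriptions are made up for illustration and are not taken from the paper or repo.

    from sentence_transformers import SentenceTransformer
    import numpy as np

    # Embedding model mentioned above; the smaller Qwen3-Embedding variants
    # (4B, 0.6B) would be a drop-in replacement if you fine-tune one later.
    model = SentenceTransformer("Qwen/Qwen3-Embedding-8B")

    # Hypothetical routes; a real router would use its own labels / exemplars.
    routes = {
        "code": "Programming questions, debugging, APIs, stack traces",
        "math": "Math word problems, proofs, numeric reasoning",
        "chat": "Open-ended conversation and general writing help",
    }
    names = list(routes.keys())
    route_vecs = model.encode(list(routes.values()), normalize_embeddings=True)

    def route(query: str) -> str:
        # One embedding forward pass per query; this is the routing-latency cost.
        q = model.encode([query], prompt_name="query", normalize_embeddings=True)[0]
        scores = route_vecs @ q  # cosine similarity, since vectors are unit-normalized
        return names[int(np.argmax(scores))]

    print(route("Why does the borrow checker reject this closure?"))  # likely "code"

Swapping in a fine-tuned 0.6B or 4B checkpoint later is just a model-name change in the same snippet.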