| ▲ | data-ottawa 4 hours ago | |
Search is AI now, so I don't get what your argument is. Since 2019, both Google and Bing have used BERT-style encoder-only architectures for search. I've been using Kagi ki (now the research assistant) for months and it's a fantastic product that genuinely improves the search experience, so overall I'm quite happy they made these investments. Looking at Google and Perplexity, this is largely the direction the industry is going. They're building tools on top of other LLMs, essentially running OpenRouter or something similar behind the scenes. They even show your token use/cost against your allowance/budget on the billing page, so you know what you're paying for. They're not training their own LLMs from scratch, which I would consider a waste of money at their size/scale.
| ▲ | VHRanger 3 hours ago | parent [-] | |
We're not running on OpenRouter; that would break the privacy policy. We get specific deals with providers and use different ones for production models. We do train smaller-scale stuff like query classification models (not trained on user queries, since I don't even have access to them!), but that's expected and trivially cheap.
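[Editor's note: the reply gives no details of the query classification model, so the sketch below is purely illustrative — the labels, feature sets, and rule-based approach are invented here, and a trained classifier would replace the hand-written rules. It only shows the shape of the task: map an incoming query to a category before routing it.]

```python
# Hypothetical query categories and features; not Kagi's actual model.
QUESTION_WORDS = {"how", "what", "why", "when", "who", "where"}
NAV_HINTS = {"login", "homepage", "www", ".com"}

def classify_query(query: str) -> str:
    # Rule-based stand-in for a small trained classifier.
    lowered = query.lower()
    tokens = lowered.split()
    if any(t in QUESTION_WORDS for t in tokens):
        return "informational"
    if any(h in lowered for h in NAV_HINTS):
        return "navigational"
    return "other"

print(classify_query("how do transformers work"))  # informational
print(classify_query("github login"))              # navigational
```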