og_kalu | 4 hours ago
No, inference is pretty cheap, and a lot of things point to that being true:

- Prices for API access to open models from third-party providers, who have no motive to subsidize inference
- Google says their median query costs about as much as a Google search

Thing is, what you're saying would have been true a few years ago. It would all have been intractable then. But LLM inference costs have quite literally been cut by several orders of magnitude in the last couple of years.
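For scale, here is a minimal back-of-envelope sketch of how per-token API pricing translates into per-query cost. The prices and token counts below are illustrative assumptions, not quotes from any provider or from this thread:

    # Rough per-query cost from per-token API pricing.
    # All numbers are assumptions chosen for illustration.
    PRICE_PER_1M_INPUT_TOKENS = 0.15   # USD, assumed (cheap hosted open model)
    PRICE_PER_1M_OUTPUT_TOKENS = 0.60  # USD, assumed

    def query_cost(input_tokens: int, output_tokens: int) -> float:
        """Dollar cost of one query at the assumed per-token prices."""
        return (input_tokens * PRICE_PER_1M_INPUT_TOKENS
                + output_tokens * PRICE_PER_1M_OUTPUT_TOKENS) / 1_000_000

    # A typical short chat turn: ~500 tokens in, ~300 tokens out.
    print(f"${query_cost(500, 300):.6f} per query")  # ~$0.000255

Under these assumptions a query lands in the range of a few hundredths of a cent, which is the same ballpark as common estimates for the cost of a web search.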
menaerus | 3 hours ago | parent
You would probably understand if you knew how LLMs are run in the first place, but, as ignorant as you are (sorry), I have no interest in debating this with you anymore. I tried to give you a tractable clue, which you unfortunately chose to counter with non-facts.