| ▲ | johnfn 3 hours ago |
The section on "artificially low costs" doesn't make sense to me. If anything, frontier model prices look inflated, not "artificially low". Easy proof: GLM-5 costs about 1/10 as much as Opus. I'm not going to tell you it's as good as Opus 4.6 -- it's not -- but it performs comparably to where frontier models were six months ago. (It's on par with Sonnet 4.5 on leaderboards, though in practice it's probably closer to Sonnet 4.0.) If I can switch to an open-source model today, run it myself, spend 1/10 as much as Opus, and get roughly where frontier models were six months ago, then fear-mongering about weathering "orders-of-magnitude price hikes" and arguing that one shouldn't even bother learning to use AI seems disconnected from reality. Who cares about the "shady accounting" OpenAI is doing, or that AI labs are "wildly unprofitable"? I can run GLM-5 right now, forever, for cheap.
| ▲ | piker 3 hours ago | parent |
The post is factoring in training costs, not just inference.