▲ jorvi | 2 hours ago
Prices are not going up. DeepSeek V4 Pro is 5-10x cheaper than Claude 4.7. As some of us have been predicting, model capability has already mostly plateaued, and the Chinese have pushed costs down relentlessly and will continue to do so. Chinese models will be used for 95% of things, with nation-native models for security- and sovereignty-sensitive workloads. Eventually (5+ years from now), efficiency gains and hardware progress will make running local models the dominant way of doing things. And yes, that puts the investors behind Claude and OpenAI in quite a pickle.
▲ robotswantdata | 2 hours ago
I want frontier-on-prem to be true, but IMO it's not good enough yet unless the workload is async. What started as an all-you-can-eat $50 buffet has quietly become a $6k bill; frontier models that don't ship your codebase to Beijing don't come cheap anymore.
▲ coffeefirst | an hour ago
Yes, that’s basically what I’m talking about. Without subsidies, that’s a lot of incentive to use more efficient or open-weight models, but also to use them in less compute-intensive ways: fewer tokens, shorter reasoning chains, less always-on usage, and generally tying up less hardware for a lot less time.