dminik 3 hours ago
Tbf I don't think it's just this one reason. While I'm not a subscriber to any LLM provider, the general feeling I get from reading comments online is that the models have a long history of getting worse over time. Of course, we don't know why, but presumably they're quantizing models or silently downgrading you to a weaker one. As for why, I imagine it's just money. Anthropic presumably just got done training Mythos and Opus 4.7; that must have cost a lot of cash. They have a lot of subscribers and users, but not enough hardware. What's a little further tweaking of the model when you've already had to dumb it down due to capacity constraints?