▲ eaf7e281 | 7 hours ago
I think they changed the quantization to save compute for their new model. That might be why the benchmark scores look good while real-world performance is much worse. I wonder whether they tested the model internally and just didn't find anything wrong with the new setup. I canceled my subscription and switched to Codex, but it's not as good. I'm tired of Anthropic changing things all the time. I used Claude because it doesn't redirect you to a different model the way OpenAI does, but now it seems both companies are doing the same thing in different ways.
▲ throwaway2027 | 7 hours ago | parent
Claude is worse: they don't tell you when your experience has been degraded, and they don't even let you fall back to a weaker model when you run out of usage.