Trufa 3 hours ago
Can you be more specific than this? Does it vary over time, from a model's launch through the following months, beyond tinkering and optimization?
tedsanders 2 hours ago | parent
Yeah, happy to be more specific. No intention of making any technically true but misleading statements. The following are true:

- In our API, we don't change model weights or model behavior over time (e.g., by time of day, or weeks/months after release). Tiny caveats: there is a bit of non-determinism in batched non-associative math that can vary by batch/hardware, bugs or API downtime can obviously change behavior, heavy load can slow down speeds, and this of course doesn't apply to the 'unpinned' models that are clearly supposed to change over time (e.g., xxx-latest). But we don't do any quantization or routing gimmicks that would change model weights.

- In ChatGPT and Codex CLI, model behavior can change over time (e.g., we might change a tool, update a system prompt, tweak default thinking time, run an A/B test, or ship other updates). We try to be transparent with our changelogs (listed below), but to be honest not every small change gets logged. Even here, though, we're not doing any gimmicks to cut quality by time of day or intentionally dumb down models after launch. Model behavior can change, though, as can the product/prompt/harness.

ChatGPT release notes: https://help.openai.com/en/articles/6825453-chatgpt-release-...

Codex changelog: https://developers.openai.com/codex/changelog/

Codex CLI commit history: https://github.com/openai/codex/commits/main/
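The "non-determinism in batched non-associative math" caveat is easy to illustrate: floating-point addition is not associative, so summing the same values grouped differently (as can happen when requests are batched differently across hardware) yields slightly different results. A minimal Python sketch of the underlying arithmetic effect (not anyone's actual serving code):

```python
# Floating-point addition is not associative: the grouping of operands
# changes the rounding, so the final bits of the result can differ.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.1 + 0.2 rounds to 0.30000000000000004 first
right = a + (b + c)  # 0.2 + 0.3 rounds to exactly 0.5 first

print(left == right)  # False: the two groupings disagree in the last bit
print(left, right)
```

The same effect, accumulated across thousands of additions in a matrix multiply whose reduction order depends on batch composition, is enough to nudge logits and occasionally flip a sampled token, even with identical weights.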
joshvm 2 hours ago | parent
My gut feeling is that performance is more heavily affected by harnesses, which get updated frequently. This would explain why people feel that Claude is sometimes more stupid - that's actually accurate phrasing, because Sonnet itself is probably unchanged. Unless, of course, Anthropic also makes small A/B adjustments to the weights while technically claiming they don't do dynamic degradation/quantization based on load. Either way, both affect the quality of your responses.

It's worth checking different versions of Claude Code, and updating your tools if you don't do it automatically. Also run the same prompts through VS Code, Cursor, Claude Code in the terminal, etc. You can get very different model responses depending on the system prompt, what context is passed via the harness, how the rules are loaded, and all sorts of minor tweaks. If you make raw API calls and see behavioural changes over time, that would be another concern.
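A cheap way to watch for the raw-API drift mentioned above is to fingerprint responses to a fixed prompt and compare the hashes over time. This is just a sketch; the normalisation rules and the `response_fingerprint` name are my own invention, and fetching the responses from whichever API you use is left out:

```python
import hashlib

def response_fingerprint(text: str) -> str:
    """Normalise a model response and hash it, so superficial whitespace
    or casing differences don't register as behavioural drift."""
    normalised = " ".join(text.split()).lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Fingerprint a baseline response to a fixed prompt...
baseline = response_fingerprint("The capital of France is Paris.")

# ...then compare fingerprints of later responses to the same prompt.
same = response_fingerprint("The capital of  France is Paris.")  # extra space only
changed = response_fingerprint("The capital of France is Lyon.")

print(baseline == same)     # True: cosmetic difference, no drift flagged
print(baseline == changed)  # False: the response content actually changed
```

Given the batching non-determinism discussed upthread, occasional single-token wobbles are expected even from an unchanged model, so a sensible check compares the distribution of fingerprints across repeated runs rather than demanding an exact match every time.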