M4v3R, 20 hours ago:
Context degradation is a real problem with all frontier LLMs. As a rule of thumb I try to never exceed 50% of the available context window when working with either Claude Sonnet 4 or GPT-5, since quality drops off quickly past that point.
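To make that rule of thumb concrete, here is a minimal sketch of budgeting a chat history to roughly half the context window before each request. The 200k window, the trimming policy, and the use of tiktoken's cl100k_base encoding as a stand-in for the model's actual tokenizer are all illustrative assumptions, not how Claude Code or Codex actually manage context.

```python
# Minimal sketch: keep a chat history under ~50% of an assumed context window.
# tiktoken's cl100k_base is only a rough proxy for any given model's tokenizer.
import tiktoken

CONTEXT_WINDOW = 200_000             # assumed window size; varies by model
BUDGET = int(CONTEXT_WINDOW * 0.5)   # the ~50% rule of thumb from the comment

_enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(messages: list[dict]) -> int:
    """Approximate token count of a chat history (roles + contents only)."""
    return sum(len(_enc.encode(m["role"])) + len(_enc.encode(m["content"]))
               for m in messages)

def trim_to_budget(messages: list[dict]) -> list[dict]:
    """Drop the oldest non-system turns until the history fits the budget."""
    trimmed = list(messages)
    while count_tokens(trimmed) > BUDGET and len(trimmed) > 1:
        trimmed.pop(1)   # index 0 is assumed to be the system prompt
    return trimmed
```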
darkteflon, 19 hours ago:
Agreed, and judicious use of subagents to prevent pollution of the main thread is another good mitigant.
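As a sketch of what that isolation looks like: the subagent runs against its own fresh message list, and only its condensed result is appended to the main thread. Here call_llm is a hypothetical wrapper around whatever chat API is in use, not an actual Claude Code or Codex interface.

```python
# Sketch of subagent isolation: the subtask runs in a fresh context and only a
# short summary comes back. call_llm() is a hypothetical chat-API wrapper.
def call_llm(messages: list[dict]) -> str:
    """Hypothetical: send messages to the model and return the assistant reply."""
    raise NotImplementedError

def run_subagent(task: str) -> str:
    # the subagent sees only its own instructions, never the main history
    sub_messages = [
        {"role": "system", "content": "Do the task, then reply with a short summary only."},
        {"role": "user", "content": task},
    ]
    return call_llm(sub_messages)

def delegate(main_messages: list[dict], task: str) -> list[dict]:
    summary = run_subagent(task)
    # only the condensed result enters the main thread
    main_messages.append({"role": "user", "content": f"Subagent result: {summary}"})
    return main_messages
```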
faangguyindia, 11 hours ago:
I cap my context at 50k tokens.
EnPissant, 20 hours ago:
I've never seen that level of extreme degradation (just making a small random change and repeating the same next steps indefinitely) on Claude Code. Maybe Claude Code is more aggressive about auto compaction. I don't think Codex even compacts without /compact.
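For reference, a /compact-style step boils down to folding the older turns into a single summary message and keeping only the recent tail verbatim. The sketch below is just that idea, not the actual Claude Code or Codex implementation; call_llm is the same hypothetical wrapper as in the subagent sketch, and KEEP_RECENT is an arbitrary choice.

```python
# Rough sketch of compaction: summarize old turns into one message, keep the
# recent tail verbatim. KEEP_RECENT is an arbitrary choice for illustration.
KEEP_RECENT = 6  # most recent messages kept verbatim

def compact(messages: list[dict], call_llm) -> list[dict]:
    head, tail = messages[1:-KEEP_RECENT], messages[-KEEP_RECENT:]
    if not head:
        return messages  # nothing old enough to fold away yet
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in head)
    summary = call_llm([
        {"role": "system", "content": "Summarize this transcript; keep decisions and open TODOs."},
        {"role": "user", "content": transcript},
    ])
    # system prompt + one compacted summary + recent verbatim turns
    return [messages[0], {"role": "assistant", "content": f"(compacted) {summary}"}] + tail
```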