EnPissant 5 hours ago
Compaction is just what Claude Code has done forever, right? | ||||||||
GardenLetter27 5 hours ago
I think the point here is not that it does compaction (which Codex also already does), but that the model was trained on examples of Codex's compaction, so it should perform better after compaction has taken place (a common source of performance drops in earlier models).
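For anyone unfamiliar with the mechanism: compaction generally means replacing the oldest turns of a long conversation with a model-written summary so the context fits the window. A minimal sketch of the idea (the `summarize` helper here is a hypothetical stand-in; a real agent would call the model to produce the summary):

```python
def summarize(messages):
    # Hypothetical stand-in: a real agent asks the model to condense
    # these turns; here we just record how many were condensed.
    return {"role": "system",
            "content": f"[summary of {len(messages)} earlier turns]"}

def compact(history, max_turns=4, keep_recent=2):
    """When history exceeds max_turns, replace the oldest turns with
    a single summary message, keeping the most recent turns verbatim."""
    if len(history) <= max_turns:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [{"role": "user", "content": f"turn {i}"} for i in range(6)]
compacted = compact(history)
print(len(compacted))  # 3: one summary message plus two recent turns
```

The claimed difference is that the model has seen post-compaction contexts like this during training, rather than encountering them only at inference time.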
enraged_camel 5 hours ago
I am also trying to understand the difference between compaction and what IDEs like Cursor do when they "summarize" context over long-running conversations. Is this saying that the summarization now happens at the model level? Or are there other differences?