root_axis 2 hours ago
Funnily enough, Anthropic just went GA with 1M-context Claude, which has supposedly solved the lost-in-the-middle problem.
SyneRyder an hour ago
Just for anyone else who hadn't seen the announcement yet: this Anthropic 1M context is now the same price as the previous 256K context - not like the beta, where Anthropic charged extra for the 1M window: https://x.com/claudeai/status/2032509548297343196

As for retrieval, the post shows Opus 4.6 at 78.3% needle-retrieval success in the 1M window (compared with 91.9% at 256K), and Sonnet 4.6 at 65.1% needle retrieval at 1M (compared with 90.6% at 256K).
BloondAndDoom an hour ago
In addition to context rot, cost matters. I think a lot of people use token compression tools for cost reasons, not because of context rot.