gizmodo59 5 days ago
My take on long context for many frontier models is that it's not a question of support but of accuracy: accuracy drops drastically as you increase the context. Even if a model claims to support 10M context, in reality it doesn't perform well when you saturate the window. Curious to hear others' perspectives on this.
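One common way to quantify this kind of degradation is a "needle in a haystack" probe: bury a known fact at varying depths in filler text of increasing length and check whether the model can still retrieve it. Below is a minimal sketch of such a harness; `ask_model` is a hypothetical stand-in (here simulated with a substring search) that you would replace with a real model API call, and the filler and needle text are invented for illustration.

```python
# Sketch of a "needle in a haystack" probe for usable context length.
# `ask_model` is a placeholder; a real test would call an actual model API.

def build_haystack(n_sentences: int, needle: str, depth: float) -> str:
    """Bury `needle` at a relative `depth` (0.0 = start, 1.0 = end)
    inside n_sentences of filler text."""
    filler = ["The quick brown fox jumps over the lazy dog."] * n_sentences
    filler.insert(int(depth * n_sentences), needle)
    return " ".join(filler)

def ask_model(prompt: str, question: str) -> str:
    # Placeholder: simulates a model with perfect recall via substring
    # search. Swap in a real client to measure actual degradation.
    for sentence in prompt.split(". "):
        if "magic number" in sentence:
            return sentence
    return "not found"

needle = "The magic number is 42."
for n in (100, 1_000, 10_000):        # scale the context length
    for depth in (0.0, 0.5, 0.9):     # vary where the needle is buried
        haystack = build_haystack(n, needle, depth)
        answer = ask_model(haystack, "What is the magic number?")
        print(n, depth, "42" in answer)
```

With a real model behind `ask_model`, the interesting output is where retrieval starts failing as `n` grows, which is typically well short of the advertised context limit.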
kridsdale3 5 days ago | parent
This is my experience with Gemini. Yes, I really can put an entire codebase, all the docs and pre-dev discussions, and all the inter-engineer chat logs in there. I still see the model becoming more intoxicated as the turn count gets high.
vessenes 5 days ago | parent
Agreed. That said, in general a 1M-context model has a larger usable window than a 260k-context model.