ashirviskas a day ago

For Claude 3.5/3.7 it's closer to <30k tokens before performance degrades too much. The advertised 200k/64k figures are meaningless in this context.

jerjerjer a day ago | parent [-]

Is there a benchmark to measure real effective context length?

Sure, gpt-4o has a context window of 128k, but it loses a lot from the beginning/middle.

brookst a day ago | parent | next [-]

Here's an older study that includes Claude 3.5: https://www.databricks.com/blog/long-context-rag-capabilitie...?

evertedsphere a day ago | parent | prev | next [-]

RULER: https://arxiv.org/abs/2404.06654

NoLiMa: https://arxiv.org/abs/2502.05167

bigmadshoe a day ago | parent | prev [-]

Labs often publish "needle in a haystack" benchmarks that look very good, but my subjective experience with large contexts is always bad. Maybe we need better benchmarks.
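For readers unfamiliar with the setup: a needle-in-a-haystack test plants one unique fact at a chosen depth inside long filler text and checks whether the model can retrieve it. Below is a minimal sketch of that construction; `ask_model` is a hypothetical stand-in for a real LLM API call, not any particular vendor's interface.

```python
def build_haystack(needle: str, depth: float, n_filler: int = 200) -> str:
    """Embed `needle` at a relative `depth` (0.0 = start, 1.0 = end)
    within generated filler sentences."""
    filler = [
        f"Filler sentence number {i} about nothing in particular."
        for i in range(n_filler)
    ]
    pos = int(depth * len(filler))
    return " ".join(filler[:pos] + [needle] + filler[pos:])


def depth_sweep(ask_model, needle, question, answer,
                depths=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Check retrieval at each insertion depth.

    `ask_model(context, question)` is a placeholder: in a real benchmark
    it would send the prompt to the model under test and return its reply.
    Returns a dict mapping depth -> whether the answer was recovered.
    """
    return {
        d: answer.lower() in ask_model(build_haystack(needle, d), question).lower()
        for d in depths
    }
```

A real harness would also vary total context length and average over many needles; the RULER paper linked above extends this idea with multi-needle and aggregation tasks, which is why plain single-needle scores can look better than everyday experience suggests.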