consumer451 | 6 days ago
Here is a paper that echoes your experience, in general. I really wish that when papers like this one were published, someone took the methodology and kept running it against every new model (a rough sketch of what that could look like is below):

> For instance, the NoLiMa benchmark revealed that models like GPT-4o experienced a significant drop from a 99.3% performance rate at 1,000 tokens to 69.7% at 32,000 tokens. Similarly, Llama 3.3 70B's effectiveness decreased from 97.3% at 1,000 tokens to 42.7% at 32,000 tokens, highlighting the challenges LLMs face with longer contexts.
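As a rough illustration of what continuous re-benchmarking might look like, here is a minimal sketch of a harness that probes each model at several context lengths. Everything in it is an assumption, not the paper's actual code: query_model is a stand-in for whatever inference API you use, the model names are placeholders, context length is approximated in words rather than tokens, and the probe is a plain needle-in-a-haystack; the real NoLiMa benchmark specifically uses questions with minimal lexical overlap with the needle, which is harder.

    import random

    CONTEXT_LENGTHS = [1_000, 4_000, 16_000, 32_000]  # approximate sizes, in words
    MODELS = ["gpt-4o", "llama-3.3-70b"]              # placeholder identifiers

    NEEDLE = "The access code for the vault is 7291."
    QUESTION = "What is the access code for the vault?"
    ANSWER = "7291"
    FILLER = "The weather that day was entirely unremarkable."  # low-signal padding

    def build_prompt(num_words: int) -> str:
        """Bury the needle at a random position inside filler text."""
        words = FILLER.split()
        padding = (words * (num_words // len(words) + 1))[:num_words]
        padding.insert(random.randrange(len(padding) + 1), NEEDLE)
        return " ".join(padding) + "\n\n" + QUESTION

    def query_model(model: str, prompt: str) -> str:
        """Stub: swap in a real inference call for each model/provider."""
        return "The access code is 7291."  # canned reply so the sketch runs end to end

    def run(trials: int = 20) -> None:
        for model in MODELS:
            for length in CONTEXT_LENGTHS:
                correct = sum(
                    ANSWER in query_model(model, build_prompt(length))
                    for _ in range(trials)
                )
                print(f"{model} @ {length:>6} words: {correct / trials:.1%}")

    if __name__ == "__main__":
        run()

Run regularly against new releases, something like this would produce exactly the per-model, per-context-length degradation curves the paper reports, instead of a one-off snapshot.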