redlock 20 hours ago
“But clearly the difference between LLMs in 2025 and 2023 is not as large as between 2023 and 2021.” This is a ridiculous statement. A simple example of the huge difference is context size. GPT-4 was, what, 8K? Now we’re in the millions with good retention. And this is just context size, let alone reasoning, multimodality, etc.
Anamon 18 hours ago | parent | next
I don't think that refutes the point. I'd readily agree with the parent that in terms of actual usefulness and efficiency gains, we're on a trajectory of diminishing returns.
emp17344 16 hours ago | parent | prev
Gemini’s 2M context window is something of a gimmick and not usable in practice.