bigmadshoe 4 days ago
joenot443 3 days ago
This is a good piece. Clearly it's a pretty complex problem, and the intuitive result a layman engineer like myself might expect doesn't reflect the reality of LLMs. Regex works as reliably on 20 characters as it does on 2M characters; the only difference is speed. I've learned this will probably _never_ be the case with LLMs; there will forever exist some level of epistemic doubt in the result. When they announced big contexts in 2023, they referenced being able to find a single changed sentence in the context's copy of The Great Gatsby[1]. That example seemed _incredible_ to me at the time, but now, two years later, I'm feeling like it was pretty cherry-picked. What does everyone else think? Could you feed a novel into an LLM and expect it to find the single change?
| ||||||||||||||
dang 3 days ago
Discussed here: Context Rot: How increasing input tokens impacts LLM performance - https://news.ycombinator.com/item?id=44564248 - July 2025 (59 comments)