overgard 7 days ago
I'm not a fan of the argument that LLMs have gotten X times better in the past few years, so they will continue to get X times better in the next few years. From what I can see, most of the growth has come from optimizing a few techniques, and I'm not convinced we won't get stuck in a local maximum (actually, I think that's the most likely outcome).

Specifically, to me the limitation of LLMs is discovering new knowledge and reasoning about information they haven't seen before. LLMs still fail at things like counting the number of b's in the word "blueberry", or get distracted by random cat facts inserted into word problems (both issues I've seen reported in the last month).

I don't mean to say they're a useless tool; I'm just not into the breathless hype.
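The letter-counting failure is usually attributed to subword tokenization: the model operates on token IDs, not characters, so a word like "blueberry" arrives as one or two opaque chunks. A minimal sketch using the tiktoken library (assuming the cl100k_base encoding; the exact split varies by tokenizer):

    # Sketch: show how a BPE tokenizer chunks a word, hiding its letters.
    # Assumes the tiktoken package is installed; the split shown is not guaranteed.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("blueberry")
    pieces = [enc.decode([t]) for t in tokens]
    print(pieces)  # e.g. ['blue', 'berry'] -- the model never sees individual letters
    print(sum(p.count("b") for p in pieces))  # counting b's needs character-level access

Since the model only ever sees the token IDs, character-level questions require it to have memorized the spelling of each token rather than simply reading the letters.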
AstroBen 6 days ago
Relevant: https://xkcd.com/605/

The latest releases are seeing smaller and smaller improvements, if any. Unless someone can explain the technical reasons why they're likely to scale to being able to do X, it's a pretty useless claim.