CamperBob2 | 3 days ago
> For example, the fact that they can't learn from few examples

For one thing, yes, they can, obviously [1] -- when's the last time you checked? -- and for another, there are plenty of humans who seemingly cannot. The only real difference is that with an LLM, when the context is lost, so is the learning. That will obviously need to be addressed at some point.

> that they can't perform simple mathematical operations without access to external help (via tool calling)

But yet you are fine with humans requiring a calculator to perform similar tasks? Many humans are worse at basic arithmetic than an unaided transformer network. And, tellingly, we make the same kinds of errors.

> or that they have to expend so much more energy to do their magic (and yes, to me they are a bit magical), which makes some wonder if what these models do is a form of refined brute-force search, rather than ideating.

Well, of course, all they are doing is searching and curve-fitting. To me, the magical thing is that they have shown us, more or less undeniably (Penrose notwithstanding), that that is all we do. Questions that have been asked for thousands of years have now been answered: there's nothing special about the human brain, except for the ability to form, consolidate, consult, and revise long-term memories.

[1] E.g., https://arxiv.org/abs/2005.14165 from 2020
pegasus | 3 days ago | parent
> For one thing, yes, they can

That's post-training. The complaint I'm referring to is about the huge amount of data (and energy) required during training - which is also a form of learning, after all. Sure, there are counter-arguments, for example pointing to the huge amount of non-textual data a child ingests, but these counter-arguments are not watertight themselves (for example, one can point out that we are discussing text-only tasks). The discussion can go on and on; my point was only that cogent arguments are indeed often presented, which you were denying above.

> there are plenty of humans who seemingly cannot

This particular defense of LLMs has always puzzled me. By this measure, simply because sufficiently impaired humans exist, AGI was already achieved many decades ago.

> But yet you are fine with humans requiring a calculator to perform similar tasks

I'm talking about tasks like multiplying two 4-digit numbers (let's say 8-digit, just to be safe, for reasoning models), which 5th or 6th graders in the US are expected to be able to do with no problem - without using a calculator.

> To me, the magical thing is that they have shown us, more or less undeniably (Penrose notwithstanding), that that is all we do.

Or, to put it more tersely, they have shown you that that is all we do. Penrose, myself, and lots of others don't see it quite like that. (Feeling quite comfortable being classed in the same camp as the greatest living physicist, honestly. ;) To me, what LLMs do is approximate one aspect of our minds. But I have a strong hunch that the rabbit hole goes much deeper, your assessment notwithstanding.