hodgehog11 2 days ago
Okay, fine, let's remove the evolution part. We still spend an incredible amount of our lifetime visualising the world and drawing conclusions about the patterns within it. Our analogies are often physical, and we draw insight from that. To say that humans only draw their information from textbooks is foolhardy; at the very least, you have to agree there is much more.

Rereading the OP's comment, I realise they may have been referring to "extrapolation", which is hugely problematic from the statistical viewpoint once you actually try to break things down.

My compression argument is that LLMs see a vast amount of knowledge but are themselves quite small. Reproducing that much information from such a small space requires a great deal of pattern matching and learned underlying algorithms. I was arguing that humans are also remarkable compressors, because many years of accumulated experience go into our makeup. It's a moot point, though, because what matters is the ratio of output to capacity.
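A back-of-envelope sketch of that output-to-capacity ratio. Every number below is an illustrative assumption of mine (corpus size, token encoding cost, parameter count, precision), not a measurement of any real model:

```python
# Sketch of the "ratio of output to capacity" point.
# All numbers are illustrative assumptions, not measurements.

corpus_tokens = 10e12   # assumed training-corpus size, in tokens
bits_per_token = 16     # assumed raw encoding cost per token
params = 70e9           # assumed model size, in parameters
bits_per_param = 16     # assumed fp16 storage per parameter

corpus_bits = corpus_tokens * bits_per_token
model_bits = params * bits_per_param

# How many times larger the raw training data is than the model itself.
ratio = corpus_bits / model_bits
print(f"corpus/model capacity ratio: {ratio:.0f}x")
```

Under these made-up numbers the model is over a hundred times smaller than its training data, which is the sense in which reproducing much of that data forces heavy compression.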
vrighter 10 hours ago
They can't learn iterative algorithms if they cannot execute loops. And blurting out an output which we then feed back in does not count as a loop: as far as the system is concerned, that is a separate invocation with fresh inputs.

They can mimic the results for small instances of a problem, where the dataset contains plenty of worked examples, but they will never generalize and produce the correct output for arbitrarily sized instances. Not with current architectures. Some algorithms simply cannot be expressed as a fixed-size matrix multiplication.
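A minimal sketch of the fixed-depth point, using Euclid's GCD as a stand-in (the GCD example, the function names, and the depth of 3 are my illustration, not from the thread): a true loop runs a data-dependent number of steps, while an unrolled fixed-depth version, analogous to a network with a constant number of layers, only handles inputs whose answer is reachable within that depth.

```python
def gcd_loop(a, b):
    # A true loop: iterates a data-dependent number of times.
    while b:
        a, b = b, a % b
    return a

def gcd_fixed_depth(a, b, depth=3):
    # "Unrolled" version: exactly `depth` steps regardless of input,
    # analogous to a network with a fixed number of layers.
    for _ in range(depth):
        if b:
            a, b = b, a % b
    return a
```

For small inputs like (12, 8), three steps suffice and both versions agree; for (21, 13), which needs six reduction steps, the fixed-depth version stops early and returns the wrong answer while the loop still succeeds.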