_heimdall 3 hours ago
I think what you're getting at is basically the idea that LLMs will never be "intelligent" in any meaningful sense of the word. They're extremely effective token prediction algorithms, and they seem to be confirming that intelligence doesn't depend solely on predicting the next token. Lacking measured responses is much the same as lacking consistent principles or defining one's own goals. Those are all fundamentally different from predicting what comes next in a chain of context a few thousand, or even a million, tokens long.
GodelNumbering 3 hours ago | parent
Indeed. One could argue that LLMs will keep improving, and they would be correct. But they would not improve in ways that make them good independent agents safe for the real world. Richard Sutton got a lot of disagreeing comments when he said on the Dwarkesh Patel podcast that LLMs are not bitter-lesson (https://en.wikipedia.org/wiki/Bitter_lesson) pilled. I believe he is right. His argument is that any technique relying on human-generated data is bound to have limitations and issues that get harder and harder to maintain and scale over time, as opposed to bitter-lesson-pilled approaches that learn truly firsthand from feedback.