GodelNumbering (3 hours ago):
This highlights an important limitation of the current "AI": the lack of a measured response. The bot decides to do something based on something the LLM saw in its training data, then quickly u-turns on it (see the post from a few hours later: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...), because none of those acts come from an internal world model or grounded reasoning; it is bot see, bot do. I am sure all of us have had anecdotal experiences where you ask the agent to do something high-stakes and it starts acting haphazardly in a manner no human ever would. This is what makes me think the current wave of AI is task automation more than measured, appropriate reaction, perhaps because most of those reactions happen as a mental process and never make it into the training data.
_heimdall (3 hours ago, replying to GodelNumbering):
I think what you're getting at is basically the idea that LLMs will never be "intelligent" in any meaningful sense of the word. They're extremely effective token prediction algorithms, and they seem to be confirming that intelligence doesn't depend solely on predicting the next token. Lacking measured responses is much the same as lacking consistent principles or defining one's own goals. Those are all fundamentally different from predicting what comes next in a chain of context a few thousand, or even a million, tokens long.
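To make "token prediction" concrete, here is a minimal sketch of the autoregressive loop being described, with a hand-written bigram table standing in for a learned neural network; the table, function names, and example sentence are invented purely for illustration and no real LLM works at this toy scale:

```python
# Toy sketch of autoregressive next-token prediction.
# A hand-written bigram table stands in for the learned distribution
# that a real LLM computes over a vocabulary of subword tokens.
import random

# Hypothetical "model": P(next word | current word)
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(context):
    """Sample the next token, conditioned only on the most recent token."""
    dist = BIGRAMS.get(context[-1])
    if not dist:
        return None  # no known continuation; a real model always has a full distribution
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

def generate(prompt, max_new=5):
    out = list(prompt)
    for _ in range(max_new):
        tok = next_token(out)
        if tok is None:
            break
        out.append(tok)  # the model's own output becomes part of the next context
    return out

print(" ".join(generate(["the"])))  # e.g. "the cat sat down"
```

The only mechanism in that loop is "what plausibly comes next given the context so far"; nothing in it represents a goal, a stake, or a model of consequences, which is the gap the comment above is pointing at.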
GodelNumbering (3 hours ago, replying to _heimdall):
Indeed. One could argue that LLMs will keep on improving, and they would be correct. But they would not improve in ways that make them a good independent agent, safe for the real world. Richard Sutton got a lot of disagreeing comments when he said on the Dwarkesh Patel podcast that LLMs are not bitter-lesson (https://en.wikipedia.org/wiki/Bitter_lesson) pilled. I believe he is right. His argument is that any technique that relies on human-generated data is bound to have limitations and issues that get harder and harder to maintain and scale over time, as opposed to bitter-lesson-pilled approaches that learn truly firsthand from feedback.
_heimdall (3 hours ago, replying to GodelNumbering):
I disagree with Sutton that a main issue is using human-generated data. We humans are trained on that and we don't run into such issues. I expect the problem is more structural to how LLMs, and other ML approaches, actually work. Being disembodied algorithms that try to break all knowledge down to a complex web of probabilities, and then predict based only on those quantified data, seems hugely limiting and at odds with how human intelligence seems to work.
GodelNumbering (2 hours ago, replying to _heimdall):
Sutton actually argues that we do not train on data, we train on experiences. We try things, see what works when and where, and form views based on that. But I agree with your later point that training the way LLMs do is hugely limiting, a limit not faced by humans.
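As a rough sketch of the distinction Sutton draws, the loop below improves only from the feedback its own actions generate; there is no human-written example anywhere in it. The action names and payoff table are made up for illustration:

```python
# Sketch of "learning from experience": an epsilon-greedy bandit that improves
# purely from the rewards its own actions produce, with no human-labelled data.
import random

ACTIONS = ["a", "b", "c"]                      # hypothetical choices the agent can try
TRUE_PAYOFF = {"a": 0.2, "b": 0.5, "c": 0.8}   # unknown to the agent; stands in for "the world"

value = {a: 0.0 for a in ACTIONS}  # agent's running estimate of each action's worth
count = {a: 0 for a in ACTIONS}

def act(epsilon=0.1):
    """Mostly exploit the best-known action, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[a])

for _ in range(10_000):
    a = act()
    reward = 1.0 if random.random() < TRUE_PAYOFF[a] else 0.0  # feedback from trying
    count[a] += 1
    value[a] += (reward - value[a]) / count[a]  # incremental average of observed rewards

print(value)  # estimates drift toward TRUE_PAYOFF from experience alone
```

Next-token pretraining, by contrast, only ever fits a fixed corpus of human-generated text; nothing in that loop tries an action and observes its consequence.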
co_king_3 (2 hours ago, replying to GodelNumbering):
> One could argue that LLMs will keep on improving, and they would be correct.

No evidence given. In my opinion, someone who argues that LLMs will keep on improving is a gullible sucker.
_heimdall (an hour ago, replying to co_king_3):
Someone arguing that LLMs will keep improving may be putting too much weight on a trend continuing, but that wouldn't make them a gullible sucker. I'd argue that LLMs have gotten noticeably better at certain tasks every 6-12 months for the last few years. The idea that we are at the exact point where that trend stops and they get no better seems harder to believe.