GodelNumbering 3 hours ago
Indeed. One could argue that LLMs will keep on improving, and they would be correct. But they would not improve in ways that make them good independent agents safe for the real world. Richard Sutton got a lot of disagreeing comments when he said on the Dwarkesh Patel podcast that LLMs are not bitter-lesson (https://en.wikipedia.org/wiki/Bitter_lesson) pilled. I believe he is right. His argument is that any technique relying on human-generated data is bound to have limitations and issues that get harder and harder to maintain and scale over time, as opposed to bitter-lesson-pilled approaches that learn truly first-hand from feedback.
_heimdall 3 hours ago
I disagree with Sutton that a main issue is using human-generated data. We humans are trained on that and we don't run into such issues. I expect the problem is more structural to how LLMs, and other ML approaches, actually work. Being disembodied algorithms that try to break all knowledge down to a complex web of probabilities, and assuming that anything can be predicted from that quantified data alone, seems hugely limiting and at odds with how human intelligence seems to work.
| ||||||||
co_king_3 2 hours ago
> One could argue that the LLMs will keep on improving and they would be correct.

No evidence given. In my opinion, someone who argues that LLMs will keep on improving is a gullible sucker.
| ||||||||