▲ | soci 7 hours ago
> Fundamentally, I believe in the importance of learning from a stream of interactive experience, as humans and animals do, which is quite different from the throw-everything-in-a-blender approach of pretraining an LLM. The blender approach can still be world-changingly valuable, but there are plenty of people advancing the state of the art there.

It's a shame that the pretrained approach yields such good-enough results. The learning-from-experience approach, or what should be the "right" approach, will stagnate. I might be wrong, but it seems that aside from Carmack and a small team, "the world" is just not looking at or investing in that side of AI anymore.

However, I find it funny that Carmack is now researching this approach. At the end of the day, he was the one who invented portals, an algorithm that circumvents the need to render the whole 3D world, thereby making 3D games computationally feasible.

As a side note, I wonder what models are to come once we see the latest state-of-the-art AI video training technologies trained in sync with the joystick movements of a real player. Maybe the results will be so astonishing that even Carmack changes his mind on the subject.

EDIT: grammar & typos
▲ | tshaddox 7 hours ago | parent | next [-]
> It's a shame that the pretrained approach yields such good-enough results. The learning-from-experience approach, or what should be the "right" approach, will stagnate.

We’ll see. I’m skeptical that you’ll ever get novel theories like special and general relativity out of LLMs. For stuff like that I suspect you need the interactive learning approach, and perhaps more importantly, the ability to reject the current best theories and invent a replacement.
▲ | vlovich123 7 hours ago | parent | prev | next [-]
Despite my human bias, I’m not necessarily convinced that it’s a superior mechanism. Humans work the way they do and learn the way they do in no small part because of biological limitations and physical reality. It’s not clear that a virtual entity needs to face the same limitations, although clearly learning from feedback that isn’t available to an AI is important. It is true that humans are more energy-efficient learners, but letting the AI experiment with the real world and get feedback that way may be the only missing piece, rather than there being a problem with the “blender” approach.
▲ | anthonypasq 7 hours ago | parent | prev | next [-]
I think you're overstating this. Yann LeCun (chief AI scientist at Meta) is firmly in this camp, and I think most companies trying to bring AI into the real world via some sort of robotics technology are thinking about and testing this approach.
| ||||||||
▲ | koolala 7 hours ago | parent | prev [-]
Humans had roughly 500 million years × 8,760 hours/year ≈ 4.4 trillion hours of pre-training. I don't get why Carmack would say things should be learned in hours, or why he upper-bounds it at a human lifetime.