| ▲ | A_D_E_P_T 5 hours ago |
| Justifiable. There are a lot more degrees of freedom in world models. LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions. A well-funded and well-run startup building physical world models (grounded in spatiotemporal understanding, not just language patterns) would be attacking what I see as the actual bottleneck to AGI. Even if they succeed only partially, they may unlock the kind of generalization and creative spark that current LLMs structurally can't reach. |
|
| ▲ | jnd-cz an hour ago | parent | next [-] |
| The sum of human knowledge is more than enough to come up with innovative ideas, and not every field works directly with the physical world. Still, I would say there's enough information in written history to create a virtual simulation of a 3D world with all physical laws applying (to a certain degree, because computation is limited). What current LLMs lack is inner motivation to create something on their own without being prompted. To think in their free time (whatever that means for batch, on-demand processing), to reflect and learn, eventually to self-modify. I have a simple brain, limited knowledge, limited attention span, limited context memory. Yet I create stuff based on what I see and read online. Nothing special, sometimes based more on someone else's project, sometimes on my own ideas, which I have no doubt aren't that unique among 8 billion other people. Yet consulting with AI provides me with more ideas applicable to my current vision of what I want to achieve. Sure, it's mostly based on generally known (not always known to me) good practices. But my thoughts work the same way, only more limited by what I have slowly learned so far in my life. |
| |
| ▲ | daxfohl 19 minutes ago | parent [-] | | I guess you need two things to make that happen. First, more specialization among models and an ability to evolve, else you get all instances thinking roughly the same thing, or a deer-in-the-headlights situation where they don't know which of the millions of options they should think about. Second, fewer guardrails; there's only so much you can do by pure thought. The problem is, idk if we're ready to have millions of distinct, evolving, self-executing models running wild without guardrails. It seems like a contradiction: you can't achieve true cognition from a machine while artificially restricting its boundaries, and you can't lift the boundaries without impacting safety. |
|
|
| ▲ | andy12_ 5 hours ago | parent | prev | next [-] |
| I don't understand this view. The way I see it, the fundamental bottlenecks to AGI are continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation. World models don't solve either of these problems; they are fundamentally the same kind of deep learning architectures we are used to working with. Heck, if you think learning from the world itself is the bottleneck, you can just put a vision-action LLM in a reinforcement learning loop in a robotic/simulated body. |
| |
| ▲ | zelphirkalt 5 hours ago | parent | next [-] | | > I don't understand this view. How I see it the fundamental bottleneck to AGI is continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation. Even with continuous backpropagation and "learning" that enriches the training data, so-called online learning, the limitations will not disappear. LLMs will not be able to conclude things about the world based on fact and deduction. They only consider what is likely given their training data. They will not foresee/anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way. Whether humans always apply that much effort to conclude these things is another question. The point is that humans are fundamentally capable of doing that, while LLMs structurally are not. The problems are structural/architectural. I think it will take another 2-3 major leaps in architectures before these AI models reach human-level general intelligence, if they ever reach it. So far they can often "merely fake it" when things are statistically common in their training data. | | |
| ▲ | andy12_ 4 hours ago | parent | next [-] | | > Even with continuous backpropagation and "learning" That's what I said. Backpropagation cannot be enough; that's not how neurons work in the slightest. When you put biological neurons in a Pong environment they learn to play not through some kind of loss or reward function; they self-organize to avoid unpredictable stimulation. As far as I know, no architecture learns in such an unsupervised way. https://www.sciencedirect.com/science/article/pii/S089662732... | | |
| ▲ | torginus 2 hours ago | parent [-] | | Forgive me for being ignorant, but 'loss' in a supervised-learning ML context encodes how unlikely (high loss) or likely (low loss) the network's prediction of the output was, given the input. That sounds very similar to what those neurons do (avoid unpredictable stimulation). | | |
| ▲ | andy12_ an hour ago | parent [-] | | So, I have been thinking about this for a little while. Imagine a model f that takes a world state x and makes a prediction y. At a high level, a traditional supervised model is trained like this: f(x)=y' => loss(y',y) => how good was my prediction? Train f through backprop with that error. A model trained with reinforcement learning is more like this, where m(y) is the resulting world state after taking an action y the model predicted: f(x)=y' => m(y')=z => reward(z) => how good was the state I ended up in, given my actions? Train f with an algorithm like REINFORCE on the reward, as the world m is a non-differentiable black box. A group of neurons, meanwhile, is more like predicting the resulting world state of its own actions, g(x,y'), and trying to learn by both tuning g and the action taken, f(x): f(x)=y' => m(y')=z => g(x,y')=z' => loss(z,z') => how predictable were the results of my actions? Train g normally with backprop, and train f with an algorithm like REINFORCE using negative surprise as the reward. After talking with GPT5.2 for a little while, it seems like Curiosity-driven Exploration by Self-supervised Prediction[1] might be an architecture similar to the one I described for neurons? But with the twist that f is rewarded for making the prediction error bigger (not smaller!) as a proxy for "curiosity". [1] https://arxiv.org/pdf/1705.05363 |
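A minimal toy sketch of that third, neuron-like loop (everything here is invented for illustration: the linear environment env_step, the forward model g_predict, and the two-logit policy; it uses +surprise as the reward, i.e. the curiosity variant):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment: next state is a contraction of the current state plus an
# action-dependent push, so it is exactly learnable by a linear model.
def env_step(state, action):
    return 0.8 * state + (0.5 if action == 1 else -0.5)

# Forward model g(s, a) -> predicted next state ("train g with backprop",
# here plain gradient descent on squared error).
w = rng.normal(size=3) * 0.1  # weights for features [state, action, bias]

def g_predict(state, action):
    return w @ np.array([state, float(action), 1.0])

# Action policy f(s): two logits, trained REINFORCE-style on the intrinsic
# reward. Using +surprise rewards seeking unpredictable outcomes.
theta = np.zeros(2)

def policy_probs(state):
    logits = theta * state
    e = np.exp(logits - logits.max())
    return e / e.sum()

state = 0.0
for t in range(500):
    probs = policy_probs(state)
    action = int(rng.choice(2, p=probs))
    next_state = env_step(state, action)

    # Surprise: squared prediction error of the forward model.
    features = np.array([state, float(action), 1.0])
    error = next_state - w @ features
    surprise = error ** 2

    # Train g to reduce its prediction error (gradient of squared error).
    w += 0.05 * 2 * error * features

    # Train f with REINFORCE, surprise as the (intrinsic) reward.
    theta += 0.01 * surprise * (np.eye(2)[action] - probs) * state
    state = next_state
```

Flipping the sign of the reward term in the theta update gives the negative-surprise (predictability-seeking) variant described for the neurons.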
|
| |
| ▲ | steego 26 minutes ago | parent | prev | next [-] | | I think people MOSTLY foresee and anticipate events in OUR training data, which mostly comprises information collected by our senses. Our training data is a lot more diverse than an LLM's. We also leverage our senses as a carrier for communicating abstract ideas using audio and visual channels that may or may not be grounded in reality. We have TV shows, video games, programming languages and all sorts of rich and interesting things we can engage with that do not reflect our fundamental reality. Like LLMs, we can hallucinate while we sleep or we can delude ourselves with untethered ideas, but UNLIKE LLMs, we can steer our own learning corpus. We can train ourselves with our own untethered “hallucinations” or we can render them in art and share them with others so they can include them in their training corpus. Our hallucinations are often just erroneous models of the world. When we render one into something that has aesthetic appeal, we might call it art. If the hallucination helps us understand some aspect of something, we call it a conjecture or hypothesis. We live in a rich world filled with rich training data. We don’t magically anticipate events not in our training data, but we’re also not devoid of creativity (“hallucinations”) either. Most of us are stochastic parrots most of the time. We’ve only gotten this far because there are so many of us and we’ve been on this earth for many generations. Most of us are dazzled and instinctively driven to mimic the ideas that a small minority of people “hallucinate”. There is no shame in mimicking or being a stochastic parrot. These are critical features that helped our ancestors survive. | |
| ▲ | jstummbillig 3 hours ago | parent | prev | next [-] | | > They will not foresee/anticipate events, that are unlikely or non-existent in their training data, but are bound to happen due to real world circumstances. They are not intelligent in that way. Can you be a bit more specific about those bounds? Maybe via an example? | |
| ▲ | wiz21c 4 hours ago | parent | prev [-] | | I'm sure that if a car appeared from nowhere in the middle of your living room, you would not be prepared at all. So my question is: when is there enough training data that you can handle 99.99% of the world? |
| |
| ▲ | ben_w 4 hours ago | parent | prev | next [-] | | > Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation. While I suspect latter is a real problem (because all mammal brains* are much more example-efficient than all ML), the former is more about productisation than a fundamental thing: the models can be continuously updated already, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself before you release the update, especially as this isn't like normal software where specific features can be toggled on or off in isolation of everything else. * I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level) not evidence of absence. | |
| ▲ | 10xDev 4 hours ago | parent | prev | next [-] | | The fact that models aren't continually updating seems more like a feature. I want to know the model is exactly the same as it was the last time I used it. Any new information it needs can be stored in its context window or stored in a file to read the next it needs to access it. | | |
| ▲ | jnd-cz 2 hours ago | parent | next [-] | | Unless you run your own local models, you don't even know when OpenAI or Anthropic tweaked the model, a little or a lot. One week it's version x, next week it's version y. Just like your operating system is continuously evolving, from smaller patches of specific apps to a whole new kernel version and a new OS release. | |
| ▲ | 10xDev 7 minutes ago | parent [-] | | There is still a huge gap between a model continuously updating itself and weekly patches by a specialist team. The former would make things unpredictable. |
| |
| ▲ | kergonath 4 hours ago | parent | prev [-] | | > The fact that models aren't continually updating seems more like a feature. I think this is true to some extent: we like our tools to be predictable. But we’ve already made one jump by going from deterministic programs to stochastic models. I am sure the moment a self-evolving AI shows up that clears the "useful enough" threshold, we’ll make that jump as well. | |
| ▲ | 10xDev 4 minutes ago | parent [-] | | Being stochastic and being unpredictable are two different things. I would claim LLMs are generally predictable, even if not as predictable as a deterministic program. |
|
| |
| ▲ | A_D_E_P_T 5 hours ago | parent | prev | next [-] | | You could have continual learning on text and still be stuck in the same "remixing baseline human communications" trap. It's a nasty one, very hard to avoid, possibly even structurally unavoidable. As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in. | |
| ▲ | energy123 5 hours ago | parent | prev [-] | | I don't understand why online learning is that necessary. If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI. A hippocampus is a nice upgrade to that, but not super obviously on the critical path. | | |
| ▲ | staticman2 3 hours ago | parent | next [-] | | > If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI. I like how people are accepting this dubious assertion that Einstein would be "useful" if you surgically removed his hippocampus and engaging with this. It also calls this Einstein an AGI rather than a disabled human??? | |
| ▲ | daxfohl 14 minutes ago | parent | prev | next [-] | | He basically said that himself: "Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking". -- Albert Einstein | |
| ▲ | zelphirkalt 4 hours ago | parent | prev | next [-] | | I guess the sheer amount and variety of information you would need to pre-encode to get an Einstein at 40 is huge: every day's stream of high-resolution video feed, actions, consequences, thoughts, and ideas he had until the age of 40, every single moment. That includes social interactions, like a conversation and the facial expressions of the other person, in combination with what was said and background knowledge about the other person. Even a single conversation's data is a huge amount of data. But one might say that the brain is not lossless... True, good point. But in what way is it lossy? Can that be simulated well enough to learn an Einstein? What gives events significance is very subjective. | |
| ▲ | a-french-anon 2 hours ago | parent | prev | next [-] | | Kind of a moot point in my eyes, because I very much doubt you can arrive at the same result without the same learning process. | |
| ▲ | jeltz 3 hours ago | parent | prev | next [-] | | It could possibly be useful but I don't see why it would be AGI. | |
| ▲ | andy12_ 4 hours ago | parent | prev | next [-] | | That's true. Though would that hippocampus-less Einstein be able to keep making novel, complex discoveries from that point forward? It seems difficult. He would rapidly reach the limits of his short-term memory (the same way current models rapidly reach the limits of their context windows). | |
| ▲ | andsoitis 4 hours ago | parent | prev [-] | | Where does that training data come from? |
|
|
|
| ▲ | robrenaud 22 minutes ago | parent | prev | next [-] |
| Was AlphaGo's move 37 original? In the last step of training LLMs, reinforcement learning from verifiable rewards, LLMs are trained to maximize the probability of solving problems using their own output, driven by a reward signal akin to winning in Go. It's not just imitating human-written text. Fwiw, I agree that world models and some kind of learning from interacting with physical reality, rather than massive amounts of digitized gym environments, are likely necessary for a breakthrough toward AGI. |
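A toy sketch of the verifiable-reward idea (everything here is invented for illustration; the "policy" is just a softmax over candidate answers, not an LLM): the reward comes from programmatically checking the answer, not from matching human-written text.

```python
import math
import random

random.seed(0)

# Candidate answers to "what is 7 * 8?" and a softmax policy over them.
candidates = [54, 56, 58, 63]
logits = {c: 0.0 for c in candidates}

def sample_answer():
    # Sample a candidate proportionally to exp(logit).
    weights = [math.exp(logits[c]) for c in candidates]
    r = random.random() * sum(weights)
    for c, wt in zip(candidates, weights):
        r -= wt
        if r <= 0:
            return c
    return candidates[-1]

def verify(answer):
    # The verifiable reward: check correctness, don't imitate a reference.
    return 1.0 if answer == 7 * 8 else 0.0

# REINFORCE-style loop with a moving baseline, simplified to update only
# the sampled answer's logit.
baseline = 0.0
for step in range(2000):
    a = sample_answer()
    reward = verify(a)
    baseline += 0.05 * (reward - baseline)
    logits[a] += 0.1 * (reward - baseline)
```

After training, the policy concentrates on the answer the verifier accepts; nothing in the loop ever shows the model a human-written solution.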
|
| ▲ | Unearned5161 4 hours ago | parent | prev | next [-] |
| I have a pet peeve with the concept of "a genuinely novel discovery or invention", what do you imagine this to be? Can you point me towards a discovery or invention that was "genuinely novel", ever? I don't think it makes sense conceptually unless you're literally referring to discovering new physical things like elements or something. Humans are remixers of ideas. That's all we do all the time. Our thoughts and actions are dictated by our environment and memories; everything must necessarily be built up from pre-existing parts. |
| |
| ▲ | bonesss 2 hours ago | parent | next [-] | | Genuinely novel discovery or invention? Einstein’s theory of relativity springs to mind, which is deeply counter-intuitive and relies on the interaction of forces unknowable to our basic Newtonian senses. There’s an argument that it’s all turtles (someone told him about universes, he read about gravity, etc), but there are novel maths and novel types of math that arise around and for such theories which would indicate an objective positive expansion of understanding and concept volume. | |
| ▲ | davidfarrell 4 hours ago | parent | prev | next [-] | | W. Brian Arthur's book "The Nature of Technology" provides a framework for classifying new technology as elemental vs innovative that I find helpful. For example, the Hunt-McIlroy diff operates on the phenomenon that ordered correspondence survives editing. That was an invention (discovery of a natural phenomenon and a means to harness it). Myers diff improves the performance by exploiting the fact that text changes are sparse. That's innovation. A Python app using libdiff, that's engineering.
And then you might say in terms of "descendants": invention > innovation > engineering. But it's just a perspective. | |
| ▲ | A_D_E_P_T 4 hours ago | parent | prev | next [-] | | Suno is transformer-based; in a way it's a heavily modified LLM. You can't get Suno to do anything that's not in its training data. It is physically incapable of inventing a new musical genre. No matter how detailed the instructions you give it, and even if you cheat and provide it with actual MP3 examples of what you want it to create, it is impossible. The same goes for LLMs and invention generally, which is why they've made no important scientific discoveries. You can learn a lot by playing with Suno. | |
| ▲ | 0x3f 4 hours ago | parent | prev [-] | | Novel things can be incremental. I don't think LLMs can do that either, at least I've never seen one do it. |
|
|
| ▲ | 10xDev 5 hours ago | parent | prev | next [-] |
| Whether it is text or an image, it is just bits for a computer. A token can represent anything. |
| |
| ▲ | A_D_E_P_T 5 hours ago | parent [-] | | Sure, but don't conflate the representation format with the structure of what's being represented. Everything is bits to a computer, but text training data captures the flattened, after-the-fact residue of baseline human thought: Someone's written description of how something works. (At best!) A world model would need to capture the underlying causal, spatial, and temporal structure of reality itself -- the thing itself, that which generates those descriptions. You can tokenize an image just as easily as a sentence, sure, but a pile of images and text won't give you a relation between the system and the world. A world model, in theory, can. I mean, we ought to be sufficient proof of this, in a sense... | | |
| ▲ | firecall 4 hours ago | parent [-] | | It’s worth noting how our human relationship or understanding of our world model changed as our tools to inspect and describe our world advanced. So when we think about capturing any underlying structure of reality itself, we are constrained by the tools at hand. The capability of the tool forms the description which grants the level of understanding. |
|
|
|
| ▲ | ml-anon 20 minutes ago | parent | prev | next [-] |
| Honestly, how do people who know so little have this much confidence to post here? |
|
| ▲ | whiplash451 3 hours ago | parent | prev | next [-] |
| The term LLM is confusing your point because VLMs belong to the same bin according to Yann. Using the term autoregressive models instead might help. |
|
| ▲ | energy123 5 hours ago | parent | prev | next [-] |
| Why can't LLMs (transformers trained on multimodal token sequences, potentially containing spatiotemporal information) be a world model? |
| |
| ▲ | ForHackernews 4 hours ago | parent [-] | | https://medium.com/state-of-the-art-technology/world-models-... > One major critique LeCun raises is that LLMs operate only in the realm of language, which is a simple, discrete space compared to the continuous, complex physical world we live in. LLMs can solve math problems or answer trivia because such tasks reduce to pattern completion on text, but they lack any meaningful grounding in physical reality. LeCun points out a striking paradox: we now have language models that can pass the bar exam, solve equations, and compute integrals, yet “where is our domestic robot? Where is a robot that’s as good as a cat in the physical world?” Even a house cat effortlessly navigates the 3D world and manipulates objects — abilities that current AI notably lacks. As LeCun observes, “We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.” | | |
| ▲ | energy123 4 hours ago | parent [-] | | But they don't only operate on language? They operate on token sequences, which can be images, coordinates, time, language, etc. | | |
| ▲ | kergonath 4 hours ago | parent | next [-] | | It’s an interesting observation, but I think you have it backwards. The examples you give are all using discrete symbols to represent something real and communicating this description to other entities. I would argue that all your examples are languages. | |
| ▲ | samrus 3 hours ago | parent | prev [-] | | What does the first L stand for? That's not just vestigial: their model of the world is formed almost exclusively from language, rather than from a range of inputs contributing significantly, as for humans. The biggest thing that's missing is actual feedback on their decisions. They have no idea of that, because transformers and embeddings don't model it yet. And language descriptions and image representations of feedback aren't enough. They are too disjointed. It needs more. |
|
|
|
|
| ▲ | bsenftner 5 hours ago | parent | prev | next [-] |
| There will be no "unlocking of AGI" until we develop a new science capable of artificial comprehension. Comprehension is the cornucopia that produces everything we are: given raw stimulus, an entire communicating Universe is generated, with a plethora of highly advanced predator/prey characters in an infinitely complex dynamic, and human science and technology have no idea how to artificially make sense of that in a simultaneous, unifying whole. That's comprehension. |
| |
|
| ▲ | rvz 5 hours ago | parent | prev [-] |
| A lot more justifiable than, say, Thinking Machines at least. But we will "see". World models and vision seem like a great use case for robotics, which I can imagine being the main driver of AMI. |