andy12_ 5 hours ago

I don't understand this view. The way I see it, the fundamental bottleneck to AGI is continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation. World models don't solve any of these problems; they are fundamentally the same kind of deep learning architectures we are used to working with. Heck, if you think learning from the world itself is the bottleneck, you can just put a vision-action LLM in a reinforcement learning loop in a robotic/simulated body.

zelphirkalt 5 hours ago | parent | next [-]

> I don't understand this view. The way I see it, the fundamental bottleneck to AGI is continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.

Even with continuous backpropagation and "learning" that enriches the training data, so-called online learning, the limitations will not disappear. LLMs will not be able to conclude things about the world from fact and deduction; they only consider what is likely given their training data. They will not foresee/anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.

Whether humans always apply that much effort to conclude these things is another question. The point is that humans are fundamentally capable of doing that, while LLMs structurally are not.

The problems are structural/architectural. I think it will take another 2-3 major leaps in architecture before these AI models reach human-level general intelligence, if they ever reach it. So far they can often merely "fake it" when things are statistically common in their training data.

andy12_ 4 hours ago | parent | next [-]

> Even with continuous backpropagation and "learning"

That's what I said. Backpropagation cannot be enough; that's not how neurons work in the slightest. When you put biological neurons in a Pong environment, they learn to play not through some kind of loss or reward function; they self-organize to avoid unpredictable stimulation. As far as I know, no architecture learns in such an unsupervised way.

https://www.sciencedirect.com/science/article/pii/S089662732...

torginus 2 hours ago | parent [-]

Forgive me for being ignorant - but 'loss' in a supervised-learning ML context encodes how unlikely (high loss) or likely (low loss) the network considered the correct output to be, given the input.

This sounds very similar to me to what the neurons do (avoid unpredictable stimulation).
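If I'm reading it right, for the usual cross-entropy loss that interpretation is literal: the loss is the negative log-probability the network assigned to the observed output. A toy illustration with made-up numbers:

```python
import numpy as np

# Cross-entropy loss is the negative log-probability the network assigned
# to the correct output: an output the network considered likely gives a
# low loss, an unlikely one gives a high loss.
def cross_entropy(predicted_probs, true_index):
    return -np.log(predicted_probs[true_index])

confident_and_right = np.array([0.90, 0.05, 0.05])  # class 0 is correct
spread_out = np.array([0.34, 0.33, 0.33])

print(cross_entropy(confident_and_right, 0))  # ~0.11 (low loss)
print(cross_entropy(spread_out, 0))           # ~1.08 (high loss)
```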

andy12_ an hour ago | parent [-]

So, I have been thinking about this for a little while. Imagine a model f that takes a world state x and makes a prediction y. At a high level, a traditional supervised model is trained like this:

f(x)=y' => loss(y',y) => how good was my prediction? Train f through backprop with that error.

A model trained with reinforcement learning is closer to this, where m(y') is the world state that results from taking the action y' the model predicted:

f(x)=y' => m(y')=z => reward(z) => how good was the state I ended up in because of my actions? Train f with an algorithm like REINFORCE on that reward, since the world m is a non-differentiable black box.

A group of neurons, on the other hand, is more like predicting the world state that results from my own action, g(x,y'), and learning by tuning both the predictor g and the action taken, f(x):

f(x)=y' => m(y')=z => g(x,y')=z' => loss(z,z') => how predictable were the results of my actions? Train g normally with backprop, and train f with an algorithm like REINFORCE, using negative surprise as the reward.
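To make that last loop concrete, here is a tiny toy sketch of what I mean (the linear world, policy, and forward model are all made-up stand-ins, not a claim about how real neurons implement it):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "world" m: the next state depends on the current state and the action.
def world(x, action):
    return 0.5 * x + 2.0 * action + rng.normal(scale=0.05)

w_f = 0.0           # policy f: action mean = w_f * x
w_g = np.zeros(2)   # forward model g: prediction z' = w_g[0]*x + w_g[1]*action
sigma = 0.5         # fixed exploration noise of the policy
lr_f, lr_g = 0.01, 0.05

for step in range(5000):
    x = rng.normal()                        # observe a world state
    mean = w_f * x
    action = mean + sigma * rng.normal()    # f(x) = y' (sampled action)
    z = world(x, action)                    # m(y') = z  (real outcome)
    z_pred = w_g[0] * x + w_g[1] * action   # g(x, y') = z'

    err = z_pred - z
    surprise = err ** 2                     # loss(z, z')

    # Train g "normally with backprop": gradient of the squared error.
    w_g -= lr_g * 2 * err * np.array([x, action])

    # Train f with REINFORCE, using negative surprise as the reward.
    reward = -surprise
    grad_logp = (action - mean) / sigma**2 * x  # d log N(action; mean, sigma) / d w_f
    w_f += lr_f * reward * grad_logp

print("forward model weights:", w_g)  # should end up close to [0.5, 2.0]
print("policy weight:", w_f)          # only nudged toward predictable outcomes
```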

After talking with GPT5.2 for a little while, it seems like Curiosity-driven Exploration by Self-supervised Prediction[1] might be an architecture similar to the one I described for neurons? But with the twist that f is rewarded for making the prediction error bigger (not smaller!) as a proxy for "curiosity".

[1] https://arxiv.org/pdf/1705.05363
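In terms of the toy loop above, that twist is just flipping the sign of the surprise term (a sketch of the idea only; the actual paper computes the error in a learned feature space, with an inverse model shaping the features):

```python
import numpy as np

# Curiosity-style intrinsic reward: pay the agent *for* the forward model's
# prediction error, instead of rewarding it for avoiding that error.
def intrinsic_reward(real_next_state, predicted_next_state):
    surprise = np.sum((real_next_state - predicted_next_state) ** 2)
    return +surprise  # the "neuron-like" loop above would use -surprise

print(intrinsic_reward(np.array([1.0, 0.0]), np.array([0.2, 0.1])))  # 0.65
```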

steego 26 minutes ago | parent | prev | next [-]

I think people MOSTLY foresee and anticipate events that are in OUR training data, which mostly comprises information collected by our senses.

Our training data is a lot more diverse than an LLM's. We also leverage our senses as a carrier for communicating abstract ideas using audio and visual channels that may or may not be grounded in reality. We have TV shows, video games, programming languages and all sorts of rich and interesting things we can engage with that do not reflect our fundamental reality.

Like LLMs, we can hallucinate while we sleep or delude ourselves with untethered ideas, but UNLIKE LLMs, we can steer our own learning corpus. We can train ourselves on our own untethered “hallucinations” or render them in art and share them with others so they can include them in their training corpus.

Our hallucinations are often just erroneous models of the world. When we render them into something that has aesthetic appeal, we might call it art.

If the hallucination helps us understand some aspect of something, we call it a conjecture or hypothesis.

We live in a rich world filled with rich training data. We don’t magically anticipate events not in our training data, but we’re also not devoid of creativity (“hallucinations”) either.

Most of us are stochastic parrots most of the time. We’ve only gotten this far because there are so many of us and we’ve been on this earth for many generations.

Most of us are dazzled and instinctively driven to mimic the ideas that a small minority of people “hallucinate”.

There is no shame in mimicking or being a stochastic parrot. These are critical features that helped our ancestors survive.

jstummbillig 3 hours ago | parent | prev | next [-]

> They will not foresee/anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.

Can you be a bit more specific at all? Maybe via an example?

wiz21c 4 hours ago | parent | prev [-]

I'm sure that if a car appeared from nowhere in the middle of your living room, you would not be prepared at all.

So my question is: when is there enough training data that you can handle 99.99% of the world?

ben_w 4 hours ago | parent | prev | next [-]

> Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.

While I suspect the latter is a real problem (because all mammal brains* are much more example-efficient than all ML), the former is more about productisation than a fundamental thing: the models can be continuously updated already, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself before you release the update, especially as this isn't like normal software where specific features can be toggled on or off in isolation from everything else.

* I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level), not evidence of absence.

10xDev 4 hours ago | parent | prev | next [-]

The fact that models aren't continually updating seems more like a feature. I want to know the model is exactly the same as it was the last time I used it. Any new information it needs can be stored in its context window or stored in a file to read the next time it needs to access it.

jnd-cz 2 hours ago | parent | next [-]

Unless you use your own local models, you don't even know when or how much OpenAI or Anthropic tweaked the model. One week it's version x, next week it's version y. Just like your operating system is continuously evolving, from smaller patches of specific apps to a whole new kernel version and a new OS release.

10xDev 6 minutes ago | parent [-]

There is still a huge gap between a model continuously updating itself and weekly patches by a specialist team. The former would make things unpredictable.

kergonath 4 hours ago | parent | prev [-]

> The fact that models aren't continually updating seems more like a feature.

I think this is true to some extent: we like our tools to be predictable. But we’ve already made one jump by going from deterministic programs to stochastic models. I am sure the moment a self-evolving AI shows up that clears the "useful enough" threshold, we’ll make that jump as well.

10xDev 3 minutes ago | parent [-]

Being stochastic and being unpredictable are two different things. I would claim LLMs are generally predictable, even if not as predictable as a deterministic program.

A_D_E_P_T 5 hours ago | parent | prev | next [-]

You could have continual learning on text and still be stuck in the same "remixing baseline human communications" trap. It's a nasty one, very hard to avoid, possibly even structurally unavoidable.

As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in.

energy123 5 hours ago | parent | prev [-]

I don't understand why online learning is that necessary. If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI. A hippocampus is a nice upgrade to that, but not super obviously on the critical path.

staticman2 3 hours ago | parent | next [-]

> If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI.

I like how people are accepting this dubious assertion that Einstein would be "useful" if you surgically removed his hippocampus and engaging with this.

It also calls this Einstein an AGI rather than a disabled human???

daxfohl 13 minutes ago | parent | prev | next [-]

He basically said that himself:

"Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking".

-- Albert Einstein

zelphirkalt 4 hours ago | parent | prev | next [-]

I guess the sheer amount and also variety of information you would need to pre-encode to get an Einstein at 40 is huge: the everyday stream of high-resolution video, plus the actions, consequences, thoughts, and ideas he had at every single moment up to the age of 40. That includes social interactions, like a conversation and the other person's expressions, in combination with what was said and background knowledge about that person. Even a single conversation is a huge amount of data.

But one might say that the brain is not lossless ... True, good point. But in what way is it lossy? Can that be simulated well enough to learn an Einstein? What gives events significance is very subjective.

a-french-anon 2 hours ago | parent | prev | next [-]

Kinda a moot point in my eyes because I very much doubt you can arrive at the same result without the same learning process.

jeltz 3 hours ago | parent | prev | next [-]

It could possibly be useful but I don't see why it would be AGI.

andy12_ 4 hours ago | parent | prev | next [-]

That's true. Though would that hippocampus-less Einstein be able to keep making novel complex discoveries from that point forward? Seems difficult. He would rapidly reach the limits of his short-term memory (the same way current models rapidly reach the limits of their context windows).

andsoitis 4 hours ago | parent | prev [-]

Where does that training data come from?