sysguest a day ago

yeah that "model of the world" would mean:

babies are already born with "the model of the world"

but a lot of experiments on babies/young kids suggest otherwise

ekjhgkejhgk a day ago | parent | next [-]

> yeah that "model of the world" would mean: babies are already born with "the model of the world"

No, not necessarily. Babies don't interact with the world only by reading what people wrote on Wikipedia and Stack Overflow, which is how these models are trained. Babies do things to the world and observe what happens.

I imagine it's similar to the difference between a person sitting on a bicycle and trying to ride it, vs a person watching videos of people riding bicycles.

I think it would actually be a great experiment. If you take a person who has never ridden a bicycle in their life and feed them videos of people riding bicycles, plus literature about bikes, fiction and non-fiction, at some point I'm sure they'll be able to talk about it as if they had huge experience riding bikes, but they won't be able to ride one.

aerhardt a day ago | parent [-]

We’ve been thinking about reaching the singularity from one end, by making computers like humans, but too little thought has been given to approaching the problem from the other end: by making babies build their world model by reading Stack Overflow.

pavlov a day ago | parent | next [-]

The “Brave New World meets OpenAI” model where bottle-born babies listen to Stack Overflow 24 hours a day until they one day graduate to Alphas who get to spend Worldcoin on AI-generated feelies.

zelphirkalt 21 hours ago | parent | prev [-]

That's it. Now you've done it! I will have Stack Overflow Q&A, as well as moderator comments and question closings, playing 24/7 to my first, not-yet-born child! Q&A for the knowledge and the mod comments for good behavior, of course. This will lead to the singularity in no time!

godelski a day ago | parent | prev | next [-]

It's a lot more complicated than that.

You have instincts, right? Innate fears? This is definitely something passed down through genetics. The Hawk/Goose Effect isn't limited to baby chickens. Certainly some mental encoding passes down through genetics, given how much the brain controls, down to your breathing and heartbeat.

But instinct is basic. It's something humans are even able to override. It's a first-order approximation: too inaccurate for meaningfully complex things, but sufficient to keep you alive. Maybe we don't want to call instinct a world model (it certainly is naïve), but it can't be discounted either.

In human development, yeah, the lion's share of it happens post birth. Human babies don't even show typical signs of consciousness until around the age of 2. There are many different categories of "awareness", and these certainly grow over time. But the big thing that makes humans so intelligent is that we continue to grow and learn throughout our whole lifetimes. And we can pass that information along without genetics, and we have very advanced tools to do this.

It is a combination of nature and nurture. But do note that this happens differently in different animals. It's wonderfully complex. LLMs are quite incredible, but so too are many other non-thinking machines. I don't think we should throw them out, but we never needed to make the jump to calling them intelligent. Certainly not so quickly. I mean, what did Carl Sagan say?

imtringued a day ago | parent [-]

One of the biggest mysteries of humans vs. LLMs is that LLMs need an absurd amount of data during pretraining, then a little bit of data during fine-tuning to make them behave more human. Meanwhile humans don't need any data at all, but have the blind spot that they can only know and learn about what they have observed.

This raises two questions. What is the equivalent of a supervised learning loss function? Supposedly neurons do predictive coding: they predict what their neighbours are doing. That includes input-only neurons for touch, pain, vision, sound, taste, etc. The observations never contain actions. E.g. you can look at another human, but that will never teach you how to walk, because your legs are different from other people's legs.
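
(As a toy sketch of that "predict your neighbours" idea, here's roughly what a predictive-coding-style local objective could look like in Python. Everything here is invented for illustration: the sizes, the learning rate, and the low-dimensional "world" are all assumptions, not anything from the neuroscience literature. The point is just that each unit's only training signal is its error in predicting the other units; there are no labels and no external loss.)

  import numpy as np

  # Illustrative sketch only: each unit tries to predict the activity of its
  # neighbours; learning is driven purely by local prediction error.
  rng = np.random.default_rng(0)
  n_units = 8
  mixing = rng.normal(size=(n_units, 2))  # invented "world": activity lies near a 2-D subspace
  W = np.zeros((n_units, n_units))        # lateral prediction weights
  lr = 0.01

  for step in range(5000):
      latent = rng.normal(size=2)
      x = mixing @ latent + 0.05 * rng.normal(size=n_units)  # correlated "sensory" input
      pred = W @ x                       # each unit's guess at the others
      err = x - pred                     # local prediction error
      W += lr * np.outer(err, x)         # gradient step on squared error
      np.fill_diagonal(W, 0.0)           # a unit never predicts itself

  print("mean squared prediction error:", float(np.mean(err ** 2)))

The error drops because the units' activity is correlated (it comes from a shared latent cause), so each unit becomes predictable from the rest, with no teacher anywhere in the loop.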

How do humans avoid starving to death? How do they avoid leaving no children? How do they avoid eating food that will kill them?

These things require a complicated chain of actions. You need to find food, find a partner, and spit out poison.

This means you need a reinforcement learning analogue, but what is going to be the equivalent of the reward function? The reward function can't be created by the brain, because it would be circular. It would be like giving yourself a high, without even needing drugs. Hence, the reward signal must remain inside the body but outside the brain, where the brain can't hack it.

The first and most important reward is reproduction itself. If food and partners are abundant, the ones that don't reproduce simply die out. This means that reward functions that don't reward reproduction disappear.

Reproduction is costly in terms of energy. Do it too many times and you need to recover and eat. Hunger evolved as a result of the brain needing to know about the energy state of the body. It overrides reproductive instincts.

Now let's say you have a poisonous plant that gives you diarrhea, but you are hungry. What stops you from eating it? Pain evolves as a response to a damaged body: harmful activities signal themselves to the brain in the form of pain. Pain overrides hunger. However, what if the plant is so deadly that it will kill you? The pain sensors wouldn't be fast enough; you need to sense the poison before it enters your body. So the tongue evolves taste, and cyanide starts tasting bitter.
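
(Here's a cartoon of that layering in Python; the function name, thresholds, and weights are all invented for illustration. It just shows the structure being described: each later-evolved signal can veto the ones beneath it, and the policy reads this signal without being able to rewrite it.)

  # Cartoon of the layered reward described above (all numbers invented):
  # bitter taste > pain > hunger > reproduction. The "brain" (policy) can
  # read this signal but can't modify it, since it's computed by the body.
  def body_reward(tastes_bitter, pain, energy, ate, reproduced):
      if tastes_bitter:              # pre-ingestion poison check: hard veto
          return -10.0
      if pain > 0.5:                 # tissue damage overrides hunger
          return -5.0 * pain
      if energy < 0.3:               # hunger overrides the reproductive drive
          return 2.0 * ate - 1.0
      return 3.0 * reproduced        # default drive once safe and fed

  print(body_reward(False, 0.0, 0.2, ate=1, reproduced=0))  # hungry and fed: 1.0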

Notice something? These feelings only exist inside the human body, but they are all coupled with continued survival in one way or another. There is no such thing for robots or LLMs. They won't accidentally evolve a complex reward function like that.

godelski a day ago | parent [-]

  > Meanwhile humans don't need any data at all
I don't agree with this and I don't think any biologist or neuroscientist would either.

1) Certainly the data I discussed exists. No creature comes out a blank slate. I'll be bold enough to say that this is true even for viruses, even if we don't consider them alive. Being an automaton doesn't mean being void of data, and I'm not sure why you'd ascribe that to life or humans.

2) Humans are processing data from birth (technically before birth too, but that's not necessary for this conversation, and I think we all know bringing it up is a great way to have an argument instead of addressing the current one). This is clearly some active/online/continual/reinforcement/whatever-word-you-want-to-use learning.

It's weird to suggest an either/or situation. All evidence points to "both". Looking at different animals, you even see both, just with different distributions.

I think it's easy to oversimplify the problem, and the average conversation tends to do this. It's clearly a complex system with many variables at play. We can't approximate it with any reasonable accuracy by ignoring them or holding them constant. They're coupled.

  > The reward function can't be created by the brain, because it would be circular.
Why not? I'm absolutely certain I can create my own objectives and own metrics. I'm certain my definition of success is different from yours.

  > It would be like giving yourself a high, without even needing drugs
Which is entirely possible. Maybe it takes extreme training to do extreme versions, but it's also not like chemicals such as dopamine are constant. You definitely get a rush from completing goals. People become addicted to things like video games, high-risk activities like skydiving, or even arguing on the internet.

Just because there are externally driven or influenced goals doesn't mean internal ones can't exist. Our emotions can be driven both externally and internally.

  > Notice something?
You're using too simple a model. If you use this model, then the solution is as easy as giving a robot self-preservation (even if we need to wait a few million years). But how would self-preservation evolve beyond its initial construction without the ability to metaprocess and refine that goal? So I think this should highlight a major limitation in your belief. As I see it, the only other way is a changing environment that somehow allows the constructions' continued survival and evolves precisely so that the original instructions continue to work. Even with vague instructions, that's an unstable equilibrium. I think you'll find there are a million edge cases, even if it seems obvious at first. Or read some Asimov ;)

ben_w a day ago | parent | prev | next [-]

> babies are already born with "the model of the world"

> but a lot of experiments on babies/young kids suggest otherwise

I believe they are born with such a model? It's just that the model is one where mummy still has fur for the baby to cling on to? And where, aged something like 5 to 8, it's somehow useful for us to build small enclosures to hide in, leading to a display of pillow forts in the modern world?

sysguest a day ago | parent [-]

damn, I guess I should have been more specific:

"LLM-level world-detail knowledge"

ben_w a day ago | parent [-]

I think I'm even more confused now about what you mean…

rwj a day ago | parent | prev [-]

Lots of experiments show that babies develop important capabilities at roughly the same ages. That speaks to inherited abilities.