coldtea 4 days ago

>They’re basically Markov chains on steroids. There is no intelligence in this, and in my opinion actual intelligence is a prerequisite for AGI.

This argument is circular.

A better argument would address why human intelligence might not also just be "Markov chains on even better steroids", given LLM successes in many types of reasoning and at passing the Turing test, i.e. at producing results that previously required intelligence.

IgorPartola 4 days ago | parent [-]

Humans think even when not being prompted by other humans, and in some cases can learn new things by having intuition make a concept clear or by performing thought experiments or by combining memories of old facts and new facts across disciplines. Humans also have various kinds of reasoning (deductive, inductive, etc.). Humans also can have motivations.

I don’t know if AGI needs to have all human traits but I think a Markov chain that sits dormant and does not possess curiosity about itself and the world around itself does not seem like AGI.

coldtea 4 days ago | parent | next [-]

>Humans think even when not being prompted by other humans

That's more of an implementation detail. Humans take constant sensory input and have some way to re-introduce input later (e.g. remembering something).

Both could be added (even trivially) to LLMs.
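For a concrete sense of what "adding both" could look like: a minimal sketch (llm_complete and read_sensors here are hypothetical placeholders, not a real API) of a model wrapped in an always-on loop that feeds it "sensory" input and re-introduces its own earlier output as memories:

    import time
    from collections import deque

    # Hypothetical stand-in for any chat-completion call; swap in a real client.
    def llm_complete(prompt: str) -> str:
        return f"(model output for: {prompt[:40]}...)"

    def read_sensors() -> str:
        # Placeholder "sensory input": a camera-frame caption, a clock tick,
        # an inbox poll, etc.
        return f"timestamp={time.time():.0f}"

    memory = deque(maxlen=10)  # rolling store of past thoughts to re-introduce

    while True:
        observation = read_sensors()
        recalled = " | ".join(memory)  # re-inject earlier output as "memory"
        thought = llm_complete(f"Observation: {observation}\nMemories: {recalled}\nThink:")
        memory.append(thought)
        time.sleep(1)  # the "clock rate" of this unprompted loop

Whether that counts as "thinking without being prompted" or just moves the prompt into a loop is the philosophical question, but the plumbing itself is simple.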

And it's not at all clear that human thought is constant. It just appears so to our naive intuition (the same way we see a movie as moving, not as 24 static frames per second). It's a discontinuous mechanism though (propagation time, etc.), and this has been shown: e.g. EEG/MEG studies show the brain samples sensory input in a periodic pattern, stimuli with small time differences between them are lost (as if there is a blind window in perception), etc.

>and in some cases can learn new things by having intuition make a concept clear or by performing thought experiments or by combining memories of old facts and new facts across disciplines

Unless we define intuition in a way that excludes LLM-style mechanisms a priori, who's to say LLMs don't do all those things as well, even if in a simpler way?

They've been shown to combine concepts across disciplines, and to develop concepts not directly in their training set.

And "performing thought experiments" is not that different than the reasoning steps and backtracking LLMs also already do.

Not saying LLMs are at parity with human thinking/consciousness. Just that it's not clear they aren't doing more or less the same thing, even at reduced capacity and with a different architecture and runtime setup.

throwaway-0001 4 days ago | parent | prev [-]

The environment is constantly prompting you. That Coca-Cola ad you see is prompting you to do something. That feeling of hunger is prompting "you" to find food. That memory that makes you miss someone is another prompt to find that person, or to avoid them.

Sometimes the prompt comes from outside your body; other times it comes from inside.