slopinthebag 5 hours ago

I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains in which LLMs are either incapable or inferior to us.

Here are some example definitions of intelligence:

> The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.

> "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills".

> Goal-directed adaptive behavior.

> a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation

But even a housefly possesses a level of intelligence in flight and spatial awareness that dominates any LLM. Would it be fair to say a fly is more intelligent than an LLM? It certainly is along a narrow set of axes.

> Because the only brute-forced aspect of LLM intelligence is its creation.

I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. Not quite an exhaustive search, but massively compressed experience + pattern matching.

But regardless, even if both forms of intelligence arrived via some form of brute force, what is more important to me is the result of that - how does the process of employing our intelligence look.

> This very post, with the transcript available is an example of how untrue it is.

The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that really.

> Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

You're so close to getting it lol

famouswaffles 5 hours ago

>I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains in which LLMs are either incapable or inferior to us.

So all humans are overwhelmingly more intelligent, but cannot even manage to be as capable in a significant number of domains? That's not what overwhelming means.

>I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force.

That is not really what “brute force” means. Pattern learning over a compressed representation of experience is not the same thing as exhaustive search. Calling any statistical method “brute force” just makes the term too vague to be useful.
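To make the distinction concrete, here is a toy sketch in Python (my own illustration with made-up function names, not a model of how LLMs actually work): an exhaustive search tests every candidate, while a system that has compressed its experience into a few parameters answers directly.

```python
# Toy contrast: exhaustive search vs. prediction from "compressed experience".
# The linear setup and function names are illustrative assumptions only.

def brute_force_invert(f, target, search_space):
    """Exhaustive search: test every candidate until one matches."""
    for x in search_space:
        if f(x) == target:
            return x
    return None

def fit_linear(examples):
    """'Learning': compress many (x, f(x)) observations into two numbers."""
    (x0, y0), (x1, y1) = examples[0], examples[-1]
    slope = (y1 - y0) / (x1 - x0)
    return slope, y0 - slope * x0

def learned_invert(params, target):
    """Prediction: a single arithmetic step using the compressed model."""
    slope, intercept = params
    return (target - intercept) / slope

f = lambda x: 3 * x + 7
space = range(10_000)

# Brute force may inspect thousands of candidates...
assert brute_force_invert(f, 30_004, space) == 9_999

# ...while the fitted model answers in constant time from two parameters.
params = fit_linear([(x, f(x)) for x in range(100)])
assert learned_invert(params, 30_004) == 9_999
```

Both reach the same answer, but only the first one is "brute force" in the usual sense of the term.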

> what is more important to me is the result of that - how does the process of employing our intelligence look.

But this is exactly where you are smuggling in assumptions. We do not actually understand the internal workings of either the human brain or frontier LLMs at the level needed to make confident claims like this. So a lot of what you are calling “the result” is really just your intuition about what intelligence is supposed to look like.

And I do not think that distinction is as meaningful as you want it to be anyway. Flight is flight. Birds fly and planes fly. A plane is not a “simulacrum of flight” just because it achieves the same end by a different mechanism.

>The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that really.

You do not need access to every internal representation to see that the model did not arrive at the answer by brute-forcing all possibilities. The observed behavior is already enough to rule that out.

> Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

>You're so close to getting it lol.

No, you don't understand what I'm saying. If we were to model the brain more faithfully in silicon, it would be even less efficient than LLMs, never mind humans. Does that mean the way the brain works is wrong? No, it means we are dealing with two entirely different substrates, and directly comparing efficiencies like that to show one is superior is silly.

slopinthebag 5 hours ago

> So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains

When the number of domains in which humans are more capable than LLMs vastly exceeds the number of domains in which LLMs are more capable than humans, yes.

I also agree that we don't have a great understanding of either human or LLM intelligence, but we can at least observe major differences and conclude that there are, in fact, major differences. In the same way, we can conclude that birds and planes have major differences, and saying "there's nothing unique about birds, look at planes" is just a really weird thing to say.

> If we were to model the brain more faithfully in silicon, it would be even less efficient than LLMs

Do you think perhaps this massive gap points to a significant, foundational structural and functional difference between these types of intelligence?

famouswaffles 4 hours ago

[dead]