famouswaffles 5 hours ago

>I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains in which LLMs are either incapable or inferior to us.

So all humans are overwhelmingly more intelligent, yet can't even manage to be as capable in a significant number of domains? That's not what "overwhelming" means.

>I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force.

That is not really what “brute force” means. Pattern learning over a compressed representation of experience is not the same thing as exhaustive search. Calling any statistical method “brute force” just makes the term too vague to be useful.
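The distinction can be made concrete with a toy sketch (purely illustrative — not a claim about how LLMs work internally; the functions and tiny "corpus" here are made up for the example). Brute force enumerates every candidate until one matches; a statistical learner compresses past observations into a model and answers in one step, with no enumeration at all.

```python
from collections import Counter
from itertools import product

# Brute force: examine every 3-letter string over the alphabet until
# we hit the target. Cost grows with the size of the search space.
def brute_force_find(target, alphabet="abc"):
    tries = 0
    for candidate in product(alphabet, repeat=len(target)):
        tries += 1
        if "".join(candidate) == target:
            return tries  # number of candidates examined

# Statistical: learn next-character frequencies from a tiny "experience"
# corpus, then predict in a single lookup -- no search at all.
def train_bigram(corpus):
    counts = {}
    for word in corpus:
        for a, b in zip(word, word[1:]):
            counts.setdefault(a, Counter())[b] += 1
    return counts

def predict_next(model, char):
    return model[char].most_common(1)[0][0]

model = train_bigram(["cab", "cat", "car"])
print(brute_force_find("cab"))   # 20 candidates examined before a hit
print(predict_next(model, "c"))  # 'a' -- one lookup, zero candidates tried
```

The two programs reach the same answer by fundamentally different strategies, which is why lumping "any statistical method" in with exhaustive search erases a real distinction.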

> what is more important to me is the result of that - how does the process of employing our intelligence look.

But this is exactly where you are smuggling in assumptions. We do not actually understand the internal workings of either the human brain or frontier LLMs at the level needed to make confident claims like this. So a lot of what you are calling “the result” is really just your intuition about what intelligence is supposed to look like.

And I do not think that distinction is as meaningful as you want it to be anyway. Flight is flight. Birds fly and planes fly. A plane is not a “simulacrum of flight” just because it achieves the same end by a different mechanism.

>The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that really.

You do not need access to every internal representation to see that the model did not arrive at the answer by brute-forcing all possibilities. The observed behavior is already enough to rule that out.

> Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

>You're so close to getting it lol.

No, you don't understand what I'm saying. If we were more faithful to the brain in silicon, it would be even less efficient than LLMs, never mind humans. Does that mean the way the brain works is wrong? No, it means we are dealing with two entirely different substrates, and directly comparing efficiencies like that to show one is superior is silly.
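A rough back-of-envelope version of the compute claim above. All figures are commonly cited ballpark estimates (neuron and synapse counts, average firing rate) plus an assumed per-event cost — none of it is a precise measurement:

```python
# Ballpark figures -- assumptions, not measurements.
neurons = 86e9               # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4    # ~10,000 synapses per neuron
avg_firing_rate_hz = 1.0     # ~1 spike/s average across the brain
ops_per_synaptic_event = 10  # assumed cost of updating one synapse

ops_per_second = (neurons * synapses_per_neuron
                  * avg_firing_rate_hz * ops_per_synaptic_event)
print(f"{ops_per_second:.1e} ops/s")  # ~8.6e15 ops/s for a bare point-neuron model
```

Even this crude lower bound is within a few orders of magnitude of an exascale machine, and detailed biophysical models (ion channels, dendritic dynamics) multiply the per-event cost by orders of magnitude — which is the point: the substrates are so different that raw efficiency comparisons tell you little about which "works right."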

slopinthebag 5 hours ago | parent [-]

> So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains

When the number of domains in which humans are more capable than LLMs vastly exceeds the number of domains in which LLMs are more capable than humans, yes.

I also agree that we don't have a great understanding of either human or LLM intelligence, but we can at least observe major differences and conclude that there are, in fact, major differences. In the same way, we can conclude that birds and planes have major differences; saying "there's nothing unique about birds, look at planes" is just a really weird thing to say.

> If we were to be more accurate to the brain in silicon, it would be even less efficient than LLMs

Do you think perhaps this massive difference points to a significant, foundational structural and functional difference between these types of intelligence?

famouswaffles 4 hours ago | parent [-]

[dead]