slopinthebag 6 hours ago

> I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

Uh, because up until and including now, we are...?

virgildotcodes 6 hours ago | parent [-]

Every living thing on Earth is unique. Every rock is unique in virtually infinite ways from the next otherwise identical rock.

There are also a tremendous number of similarities between all living things and between rocks (and between rocks and living things).

Most ways in which things are unique are arguably uninteresting.

The default mode, the null hypothesis, should be to assume that human intelligence isn't interestingly unique unless it can be proven otherwise.

In these repeated discussions around AI, there is criticism over the way an AI solves a problem, without any actual critical thought about the way humans solve problems.

The latter is left up to the assumption that "of course humans do X differently", and if you press, you invariably end up at something couched in a vague mysticism about our inner workings.

Humans apparently create something from nothing, without the recombination of any prior knowledge or outside information, and they get it right on the first try. Through what, divine inspiration from the God who made us and only us in His image?

gf000 3 hours ago | parent | next [-]

Humans are obviously unique in an interesting way. People only "move the goalpost" because it's not an interesting question whether humans can do some great stuff; the interesting question is where the boundary lies (whether against animals or AI).

Some example feats that make humans trivially superior (in terms of intelligence): the invention of nuclear bombs and power plants, the theory of relativity, etc.

stavros 2 hours ago | parent [-]

But that's unique in the sense of "you have a bag of ten apples and I have a bag of eleven apples, therefore my bag is unique". It's not a qualitatively different intelligence from a dog's; you just have more of it.

gf000 an hour ago | parent [-]

I would dispute that point. The biological components are the same, but emergent behavior is a thing. Both the scale and the number of connections (and the way they connect) surpassed some limit, after which cognitive capabilities increased severalfold, to the point that humans "took over the world".
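As an illustration of that kind of threshold (an analogy of my own, not a model of the brain): in an Erdős–Rényi random graph, nudging the average degree past 1 flips the largest connected component from tiny to giant, a small quantitative change in connectivity producing a qualitative change in global behavior:

```python
import random
from collections import Counter

def largest_component(n, avg_degree, seed=0):
    """Size of the largest connected component of an Erdos-Renyi graph G(n, p)."""
    rng = random.Random(seed)
    p = avg_degree / n  # edge probability giving the desired average degree
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Sample each possible edge independently and union its endpoints.
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return max(Counter(find(i) for i in range(n)).values())

n = 1000
below = largest_component(n, 0.5)  # below the threshold: components stay small
above = largest_component(n, 2.0)  # above it: a giant component emerges
print(below, above)
```

Same nodes, same wiring rule; only the connection density changes, yet the global structure is qualitatively different on either side of the threshold.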

And arguably, further increases in intelligence fall into the diminishing-returns category compared to that earlier boom. (Someone being "2x smarter" doesn't give them enough of an advantage to reign over others; at least, history would look different if it did, in my opinion.)

Probably a dumb example, but just by increasing speed you go from well-behaved laminar flow to turbulence, yet it's fundamentally the same physics one level beneath.
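That flow example can be made concrete with the Reynolds number (a rough sketch; the water-in-a-2-cm-pipe setup and the usual ~2300 pipe-flow transition threshold are assumptions I'm supplying, not details from the thread):

```python
# Reynolds number Re = rho * v * L / mu for water in a 2 cm pipe.
# Same fluid, same pipe; only the speed changes, yet the flow regime flips.
rho = 1000.0   # water density, kg/m^3
mu = 1.0e-3    # dynamic viscosity of water, Pa*s
L = 0.02       # pipe diameter, m

def reynolds(v):
    return rho * v * L / mu

for v in (0.05, 0.5):
    re = reynolds(v)
    # ~2300 is the commonly quoted laminar/turbulent threshold for pipe flow
    regime = "laminar" if re < 2300 else "turbulent"
    print(f"v = {v} m/s -> Re = {re:.0f} ({regime})")
```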

stavros an hour ago | parent [-]

Yeah, I don't know that there's such a jump. Dogs, for example, clearly communicate, both with us and with each other. They don't have language, but they also don't lack communication skills. To me, language is just "better communication" rather than a qualitatively different thing.

slopinthebag 6 hours ago | parent | prev | next [-]

I doubt you can even define intelligence sufficiently to argue this point, since that's an ongoing debate without a resolution thus far.

But you claimed that humans aren't unique. I think it's pretty obvious we are on many dimensions including what you might classify as "intelligence". You don't even necessarily have to believe in a "soul" or something like that, although many people do. The capabilities of a human far surpass every single AI to date, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this.

> There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical.

Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

virgildotcodes 2 hours ago | parent | next [-]

> I doubt you can even define intelligence sufficiently to argue this point.

Agreed.

> But you claimed that humans aren't unique.

I'm arguing that it is up to us to prove that they are interestingly unique in the context of this post. Which is pretty narrow - how do we solve problems?

The theme I was arguing against that I've seen repeated throughout this thread is that AIs are just recombining things they've absorbed and throwing those recombinations at the wall until they see what sticks.

It raises the question of why we presume that humans do things any differently, when it seems quite clear that we can only ever possibly do the same, unless we are claiming that knowledge of the universe can enter the human mind through some means other than through the known senses.

Not at all disputing that humans possess many capabilities that AIs do not.

> Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

I touched on this elsewhere, will go ahead and paste it here again:

The fundamental thing I'm speaking out against is the arrogance of human exceptionalism.

This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over.

Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions...

I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche, but any material analysis of our universe shows that it is extremely unlikely that we hold that position.

famouswaffles 5 hours ago | parent | prev | next [-]

>The capabilities of a human far surpass every single AI to date

What does this mean? Are you saying every human could have achieved this result? Or this? https://openai.com/index/new-result-theoretical-physics/

Because, well, you'd be wrong.

>, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this.

Human intelligence was brute-forced. Please let's all stop pretending that those billions of years of evolution don't count and that we poofed into existence. And you can keep parroting 'simulacrum of intelligence' all you want, but that isn't going to make it any more true.

slopinthebag 5 hours ago | parent [-]

> The capabilities of a human far surpass every single AI to date

Meaning however you (reasonably) define intelligence, if you compare humans to any AI system humans are overwhelmingly more capable. Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence. Or else we'd be talking about how my calculator is intelligent. Of course computers can compute faster than we can; that's beside the point.

> Human intelligence was brute forced.

No, I don't mean how the intelligence evolved or was created. But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional. Evolution is not an intentional process, unless you believe in God or a creator of sorts, which is totally fair but probably not what you were intending.

But my point is that LLM's essentially arrive at answers by brute force through search. Go look at what a reasoning model does to count the letters in a sentence, or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on 20% of a lightbulb!).
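The parenthetical checks out on commonly cited figures (a back-of-envelope sketch; the ~20 W brain estimate and the 100 W incandescent bulb are assumptions I'm supplying, not numbers from the thread):

```python
# Back-of-envelope check of the "brain runs on 20% of a lightbulb" claim.
brain_watts = 20.0          # commonly cited estimate for the human brain's draw
incandescent_watts = 100.0  # classic incandescent bulb

fraction = brain_watts / incandescent_watts
kwh_per_day = brain_watts * 24 / 1000  # the brain's energy use over a full day

print(f"brain draws ~{fraction:.0%} of a 100 W bulb")
print(f"that's about {kwh_per_day} kWh per day")
```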

famouswaffles 4 hours ago | parent [-]

>Meaning however you (reasonably) define intelligence, if you compare humans to any AI system humans are overwhelmingly more capable.

Really? Every human? Are you sure? Because I certainly wouldn't ask just any human for the things I use these models for, and I use them for a lot of things. So, to me, the idea that all humans are 'overwhelmingly more capable' is blatantly false.

>Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence.

What was achieved here or in the link I sent is not just "solving a math equation".

>Or else we'd be talking about how my calculator is intelligent.

If you said that humans are overwhelmingly more capable than calculators in arithmetic, well I'd tell you you were talking nonsense.

>Of course computers can compute faster than we can; that's beside the point.

I never said anything about speed. You are not making any significant point here lol

>No, I don't mean how the intelligence evolved or was created.

Well then, what are you saying? Because the only brute-forced aspect of LLM intelligence is its creation. If you do not mean that, then just drop the point.

>But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional.

First of all, this makes no sense, sorry. Evolution is regularly described as a brute-force process by atheist and religious scientists alike.

Second, I don't have any problem with people thinking we have a creator, although even that stance doesn't necessarily mean a magic 'poof into existence' either.

>But my point is that LLM's essentially arrive at answers by brute force through search.

Sorry, but that's just not remotely true. This is so untrue I honestly don't know what to tell you. This very post, with the transcript available, is an example of how untrue it is.

>or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on 20% of a lightbulb!).

Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

gf000 3 hours ago | parent | next [-]

> Really? Every human?

Yes, in many ways, absolutely. Just because a model is a better "Google" than my dummy friend doesn't mean that this same friend isn't more capable in countless cases.

> Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

Isn't that just more proof of how efficient the human brain is? Especially given that a wire has much better properties than water solutions in bags.

famouswaffles 2 hours ago | parent [-]

>Just because a model is a better "Google" than my dummy friend doesn't mean that this same friend isn't more capable in countless cases.

People use LLMs for a lot of things. 'Better Google' is a tiny slice of that.

>Isn't that just more proof of how efficient the human brain is?

Sure. So what? If a game runs poorly on one piece of hardware and excellently on another, does that mean the game was fundamentally different between the two devices? No, of course not.

slopinthebag 4 hours ago | parent | prev [-]

I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains in which LLMs are either incapable outright or inferior to us.

Here might be some definitions of intelligence for example:

> The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.

> "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills".

> Goal-directed adaptive behavior.

> a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation

But even a housefly possesses levels of intelligence regarding flight and spatial awareness that dominate any LLM. Would it be fair to say a fly is more intelligent than an LLM? It certainly is along a narrow set of axes.

> Because the only brute-forced aspect of LLM intelligence is its creation.

I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. Not quite an exhaustive search, but massively compressed experience + pattern matching.

But regardless, even if both forms of intelligence arrived via some form of brute force, what is more important to me is the result of that: how the process of employing our intelligence looks.

> This very post, with the transcript available is an example of how untrue it is.

The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that really.

> Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

You're so close to getting it lol

famouswaffles 3 hours ago | parent [-]

>I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains in which LLMs are either incapable outright or inferior to us.

So all humans are overwhelmingly more intelligent, but cannot even manage to be as capable in a significant number of domains? That's not what overwhelming means.

>I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force.

That is not really what “brute force” means. Pattern learning over a compressed representation of experience is not the same thing as exhaustive search. Calling any statistical method “brute force” just makes the term too vague to be useful.

> what is more important to me is the result of that: how the process of employing our intelligence looks.

But this is exactly where you are smuggling in assumptions. We do not actually understand the internal workings of either the human brain or frontier LLMs at the level needed to make confident claims like this. So a lot of what you are calling “the result” is really just your intuition about what intelligence is supposed to look like.

And I do not think that distinction is as meaningful as you want it to be anyway. Flight is flight. Birds fly and planes fly. A plane is not a “simulacrum of flight” just because it achieves the same end by a different mechanism.

>The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that really.

You do not need access to every internal representation to see that the model did not arrive at the answer by brute-forcing all possibilities. The observed behavior is already enough to rule that out.

> Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.

>You're so close to getting it lol.

No, you don't understand what I'm saying. If we were to be more accurate to the brain in silicon, it would be even less efficient than LLMs, never mind humans. Does that mean how the brain works is wrong? No, it means we are dealing with two entirely different substrates, and directly comparing efficiencies like that to show one is superior is silly.

slopinthebag 3 hours ago | parent [-]

> So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains

When the number of domains in which humans are more capable than LLMs vastly exceeds the number of domains in which LLMs are more capable than humans, yes.

I also agree that we don't have a great understanding of either human or LLM intelligence, but we can at least observe major differences and conclude that there are, in fact, major differences. In the same way we can conclude that both birds and planes have major differences, and saying that "there's nothing unique about birds, look at planes" is just a really weird thing to say.

> If we were to be more accurate to the brain in silicon, it would be even less efficient than LLMs

Do you think perhaps this massive difference points to there being a significant and foundational structural and functional difference between these types of intelligences?

famouswaffles 3 hours ago | parent [-]

[dead]

blackcatsec 5 hours ago | parent | prev [-]

> I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

I think it comes from a position of arrogance/ego. I'll speak for the US here, since that's what I know best; the average 'techie' skews toward the higher end of the intelligence distribution. This is a very, very broad stroke, and that's intentional to illustrate my point. Because of this, techie culture gains quite a bit of arrogance with regard to the masses. And this has been trained into tech culture since childhood, whether it be adults praising us for being "so smart", or for having "figured out the VCR", or some other random tech problem that almost any human being could solve by simply reading the manual.

What I've found, in the vast majority of technical problem-solving cases that average people struggle with, is that if they just took a few minutes to read a manual, they'd be able to solve a lot of it themselves. In short, I don't believe, as a very strong techie, that I'm "smarter than most", but rather that I've taken the time to dive into a subject area that most other humans feel neither the need nor the desire to.

There are objectively hard problems in tech to solve, but the people solving THOSE problems in the tech industry are few and far between. And so the tech industry as a whole has spent the last decade or two spinning in circles on increasingly complex systems to keep feeding its own ego about its own intelligence. We're now at a point where, rather than solving the puzzle, most techies are creating incrementally complex puzzles to solve because they're bored of the puzzles in front of them. "Let me solve that puzzle by making a puzzle solver." "Okay, now let me make a puzzle solver creation tool to create puzzle solvers to solve the puzzle." And so forth. At the end of the day, you're still just solving a puzzle...

But it's this arrogance that really bothers me in the tech bro culture world. And, more importantly, at least in some tech bro circles, they have realized that their path to an exponential increase in wealth doesn't lie in creating new and novel ways to solve the same puzzles, but in touting AI as the greatest puzzle solver creation tool puzzle solver known to man (and grifting off of it for a little bit).

virgildotcodes 2 hours ago | parent | next [-]

It's funny because the fundamental thing I'm speaking out against is the arrogance of human exceptionalism.

This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over.

Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions...

I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche, but any material analysis of our universe shows that it is extremely unlikely that we hold that position.

slopinthebag 4 hours ago | parent | prev [-]

I largely agree with you, but I also see this same type of thinking in people who I know are not arrogant, at least not in the tech-bro-ish way.

gormen 5 hours ago | parent | prev [-]

[dead]