| ▲ | virgildotcodes 6 hours ago |
| I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique. It's this pervasive belief that underlies so much discussion around what it means to be intelligent. The null hypothesis goes out the window. People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans. If they do, they apply it in only the most restrictive way imaginable, some two-dimensional caricature of reality, rather than considering all the ways that humans try and fail in all things throughout their lifetimes in the process of learning and discovery. There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical. |
|
| ▲ | wrqvrwvq 5 hours ago | parent | next [-] |
| It's only because humans came up with a problem, worked with the AI and verified the result that this achievement means anything at all. An AI "checking its own work" is practically irrelevant when they all seem to go back and forth on whether you need the car at the carwash to wash the car. Undoubtedly people have been passing this set of problems to AIs for months or years and have gotten back either incorrect results or results they didn't understand, but either way, a human confirmation is required. AI hasn't presented any novel problems, other than the multitudes of social problems described elsewhere. AI doesn't pursue its own goals and wouldn't know whether they've "actually been achieved". This is to say nothing of the cost of this small but remarkable advance. Trillions of dollars in training and inference, and so far we have a couple of minor (trivial?) math solutions. I'm sure if someone had bothered funding a few PhDs for a year we could have found this without AI. |
| |
| ▲ | hodgehog11 4 hours ago | parent | next [-] | | Funding a few PhDs for a year costs orders of magnitude more than the inference costs it took to solve this problem. Also, this has been active research for some time. Or I guess the people working on it are just not as good as a random bunch of students? It's amazing the lengths that people go to maintain their worldview, even if it means belittling hardworking people. I take it you're not a mathematician. This is an achievement, regardless of whether you like LLMs or not, so let's not belittle the people working on these kinds of problems, please. | | |
| ▲ | famouswaffles 4 hours ago | parent [-] | | >It's amazing the lengths that people go to maintain their worldview, even if it means belittling hardworking people. This is one of the most baffling and ironic aspects of these discussions. Human exceptionalism is what drives these arguments, but the machines are becoming so good you can no longer do this without putting down even the top-percenter humans in the process. Same thing happening all over this thread (https://news.ycombinator.com/item?id=47006594). And it's like they don't even realize it. |
| |
| ▲ | famouswaffles 5 hours ago | parent | prev [-] | | >It's only because humans came up with a problem, worked with the ai and verified the result that this achievement means anything at all. Replace ai with human here and that's...just how collaborative research works lol. |
|
|
| ▲ | snemvalts 6 hours ago | parent | prev | next [-] |
| The ability to learn and infer without absorbing millions of books and all text on internet really does make us special. And only at 20 watts! |
| |
| ▲ | famouswaffles 5 hours ago | parent | next [-] | | Last I checked, humans didn't pop into existence doing that. It happened after billions of years of brute-force, trial-and-error evolution. So well done for falling into the exact same trap the OP cautions against. Intelligence from scratch requires a mind-boggling amount of resources, and humans were no different. | | |
| ▲ | sweezyjeezy 3 hours ago | parent | next [-] | | To be fair, it is still pretty remarkable what the human brain does, especially in early years - there is no text embedded in the brain, just a crazily efficient mechanism to learn hierarchical systems. As far as I know, AI intelligence cannot do anything similar to this - it generally relies on giga-scaling, or finetuning tasks similar to those it already knows. Regardless of how this arose, or if it's relevant to AGI, this is still a uniqueness of sorts. | |
| ▲ | suddenlybananas 2 hours ago | parent | prev [-] | | Do you think evolutionary pressures are the best explanation for why humans were able to posit the Poincaré conjecture and solve it? While our mental architecture evolved over a very long time, we still learn from minuscule amounts of data compared to LLMs. |
| |
| ▲ | stavros 2 hours ago | parent | prev | next [-] | | Most people have absorbed way too few books to be able to infer properly. Hell, most people are confused by TV remotes. | |
| ▲ | throw310822 3 hours ago | parent | prev [-] | | To be fair, the knowledge embedded in an LLM is also, at this point, a couple orders of magnitude (at least) larger than what the average human being can retain. So it's not like all those books and text in the internet are used just to bring them to our level, they go way beyond. |
|
|
| ▲ | staticassertion 5 hours ago | parent | prev | next [-] |
| > I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique. Because, empirically, we have numerous unique and distinguishing qualities, obviously. Plenty of time goes into understanding this; we have a young but rigorous field of neuroscience and cognitive science. Unless you mean "fundamentally unique" in some way that would persist - like "nothing could ever do what humans do". > People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans. I frankly doubt it applies to either system. I'm a functionalist, so I obviously believe that everything a human brain does is physical and could be replicated using some other material that can exhibit the necessary functions. But that does not mean that I have to think that the appearance of intelligence always is intelligence, or that an LLM/ Agent is doing what humans do. |
| |
| ▲ | stavros 2 hours ago | parent [-] | | No, but it does mean that you should know we don't understand what intelligence is, and that maybe LLMs are actually intelligent and humans have the appearance of intelligence, for all we know. | | |
| ▲ | staticassertion an hour ago | parent [-] | | You're just defining intelligence as "undefined", which okay, now anything is anything. What is the point of that? Indeed, there's quite a lot of work that's been done on what these terms mean. The fields of neuroscience and cognitive science have contributed a lot to the area, and obviously there are major areas of philosophy that discuss how we should frame the conversation or seek to answer questions. We have more than enough, trivially, to say that human intelligence is distinct, so long as we take on basic assertions like "intelligence is related to brain structures" since we know a lot about brain structures. | | |
| ▲ | stavros an hour ago | parent [-] | | Our intelligence is related to brain structures, not all intelligence. You can't get to things like "what all intelligence, in general, is" from "what our intelligence is" any more than you can say that all food must necessarily be meat because sausages exist. | | |
| ▲ | staticassertion an hour ago | parent [-] | | But... we're talking about our intelligence. So obviously it's quite relevant. I didn't say that AI isn't intelligent, I said that we have good reason to believe that our intelligence is unique. And we do, a lot of good evidence. I obviously don't believe that all intelligence is related to specific brain structure. Again, I'm a functionalist, so I believe that any structure that can exhibit the necessary functions would be equivalent in regards to intelligence. None of this would commit me to (a) human exceptionalism (b) LLMs/ Agents being intelligent (c) LLMs/ Agents being intelligent in the way that humans are. | | |
| ▲ | stavros an hour ago | parent [-] | | This is too dependent on what you mean by "unique", though. What do we have that apes don't, and which directly enables intelligence? What do we have that LLMs don't? What do LLMs have that we don't? I don't think we know enough to definitively say "it's this bit that gives us intelligence, and there's no way to have intelligence without it". We just see what we have, and what animals lack, and we say "well it's probably some of these things maybe". | | |
| ▲ | staticassertion an hour ago | parent [-] | | > What do we have that apes don't, and which directly enables intelligence? Again, there are multiple fields of study with tons of amazingly detailed answers to this. We know about specific proteins, specific brain structures, we know about specific cognitive capabilities in the abstract, etc. > What do we have that LLMs don't? Again, quite a lot is already known about this. This feels a bit like you're starting to explore this area and you're realizing that intelligence is complex, but you may not realize that others have already been doing this work and we have a litany of information on the topic. There are big open questions, of course, but we're definitely past the point of only being able to say "there is a difference between human and ape intelligence" etc. |
|
|
|
|
|
|
|
| ▲ | conz 6 hours ago | parent | prev | next [-] |
| Re: "I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique." Perhaps this might better help you understand why this assumption still holds:
https://en.wikipedia.org/wiki/Orchestrated_objective_reducti... |
| |
| ▲ | throw310822 3 hours ago | parent | next [-] | | "Controversial theory justifies assumption". Because humans never hallucinate. | |
| ▲ | staticassertion 5 hours ago | parent | prev [-] | | It doesn't. I actually completely reject that theory, and it's nice to see that Wikipedia notes that it is "controversial". There are extremely good reasons to reject this theory. For one thing, any quantum effects are going to be quite tiny/ trivial because the brain is too large, hot, wet, etc., to see larger effects, so you have to somehow make a leap from "tiny effects that last for no time at all" to "this matters fundamentally in some massive way". It likely requires rejection of functionalism, or the acceptance that quantum states are required for certain functions. Both of those are heavy commitments, with the latter implying that there are either functions that require structures that can't be instantiated without quantum effects or functions that can't be emulated without quantum effects, both of which seem extremely unlikely to me. Probably the far more important reason: it doesn't solve any problem. It's just "quantum woo, therefore libertarian free will" most of the time. It's mostly garbage, maybe a tiny tiny bit of interesting stuff in there. It also would do nothing to indicate that human intelligence is unique. |
|
|
| ▲ | nicman23 4 hours ago | parent | prev | next [-] |
| It is not the assumption that humans are unique; it is that statistical models cannot really think outside the box most of the time. |
| |
|
| ▲ | slopinthebag 6 hours ago | parent | prev [-] |
| > I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique. Uh, because up until and including now, we are...? |
| |
| ▲ | virgildotcodes 6 hours ago | parent [-] | | Every living thing on Earth is unique. Every rock is unique in virtually infinite ways from the next otherwise identical rock. There are also a tremendous number of similarities between all living things and between rocks (and between rocks and living things). Most ways in which things are unique are arguably uninteresting. The default mode, the null hypothesis, should be to assume that human intelligence isn't interestingly unique unless it can be proven otherwise. In these repeated discussions around AI, there is criticism over the way an AI solves a problem, without any actual critical thought about the way humans solve problems. The latter is left up to the assumption that "of course humans do X differently", and if you press, you invariably end up at something couched in a vague mysticism about our inner workings. Humans apparently create something from nothing, without the recombination of any prior knowledge or outside information, and they get it right on the first try. Through what, divine inspiration from the God who made us and only us in His image? | | |
| ▲ | gf000 3 hours ago | parent | next [-] | | Humans are obviously unique in an interesting way. People only "move the goalpost" because it's not an interesting question that humans can do some great stuff; the interesting question is where the boundary is (whether against animals or AI). Some example achievements which make humans trivially superior (in terms of intelligence): the invention of nuclear bombs/plants, the theory of relativity, etc. | | |
| ▲ | stavros 2 hours ago | parent [-] | | But that's unique in the sense of "you have a bag of ten apples and I have a bag of eleven apples, therefore my bag is unique". It's not qualitatively different intelligence than a dog's, you just have more of it. | | |
| ▲ | gf000 an hour ago | parent [-] | | I would argue that point. The biological components are the same, but emergent behavior is a thing. So both the scale and the number of connections/way they connect have surpassed some limit, after which cognitive capabilities increased severalfold to the point that humans "took over the world". And arguably further increases in intelligence seem to fall into a diminishing-returns category compared to this previous boom. (Someone being "2x smarter" doesn't give them enough of an advantage to reign over others; at least history would look different were that the case, in my opinion.) Probably a dumb example, but just by increasing speed you go from well-behaved laminar flow to turbulence, yet it's fundamentally the same a level beneath. | | |
| ▲ | stavros an hour ago | parent [-] | | Yeah, I don't know that there's such a jump. Dogs, for example, clearly communicate, both with us and with each other. They don't have language, but they also don't lack communication skills. To me, language is just "better communication" rather than a qualitatively different thing. |
|
|
| |
| ▲ | slopinthebag 6 hours ago | parent | prev | next [-] | | I doubt you can even define intelligence sufficiently to argue this point, since that's an ongoing debate without a resolution thus far. But you claimed that humans aren't unique. I think it's pretty obvious we are on many dimensions, including what you might classify as "intelligence". You don't even necessarily have to believe in a "soul" or something like that, although many people do. The capabilities of a human far surpass every single AI to date, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this. > There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical. Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers. | | |
| ▲ | virgildotcodes 2 hours ago | parent | next [-] | | > I doubt you can even define intelligence sufficiently to argue this point. Agreed. > But you claimed that humans aren't unique. I'm arguing that it is up to us to prove that they are interestingly unique in the context of this post. Which is pretty narrow - how do we solve problems? The theme I was arguing against that I've seen repeated throughout this thread is that AIs are just recombining things they've absorbed and throwing those recombinations at the wall until they see what sticks. It raises the question of why we presume that humans do things any differently, when it seems quite clear that we can only ever possibly do the same, unless we are claiming that knowledge of the universe can enter the human mind through some means other than through the known senses. Not at all disputing that humans possess many capabilities that AIs do not. > Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers. I touched on this elsewhere, will go ahead and paste it here again: The fundamental thing I'm speaking out against is the arrogance of human exceptionalism. This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over. Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions... I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche but any material analysis of our universe shows that it is extremely unlikely that we hold that position. | |
| ▲ | famouswaffles 5 hours ago | parent | prev | next [-] | | >The capabilities of a human far surpass every single AI to date What does this mean? Are you saying every human could have achieved this result?
Or this? https://openai.com/index/new-result-theoretical-physics/ Because, well, you'd be wrong. >, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this. Human intelligence was brute-forced. Please let's all stop pretending like those billions of years of evolution don't count and we poofed into existence. And you can keep parroting 'simulacrum of intelligence' all you want, but that isn't going to make it any more true. | | |
| ▲ | slopinthebag 5 hours ago | parent [-] | | > The capabilities of a human far surpass every single AI to date Meaning however you (reasonably) define intelligence, if you compare humans to any AI system, humans are overwhelmingly more capable. Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence. Or else we'd be talking about how my calculator is intelligent. Of course computers can compute faster than we can; that's beside the point. > Human intelligence was brute forced. No, I don't mean how the intelligence evolved or was created. But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional. Evolution is not an intentional process, unless you believe in God or a creator of sorts, which is totally fair but probably not what you were intending. But my point is that LLMs essentially arrive at answers by brute force through search. Go look at what a reasoning model does to count the letters in a sentence, or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on 20% of a lightbulb!). | | |
| ▲ | famouswaffles 4 hours ago | parent [-] | | >Meaning however you (reasonably) define intelligence, if you compare humans to any AI system humans are overwhelmingly more capable. Really? Every human? Are you sure? Because I certainly wouldn't ask just any human for the things I use these models for, and I use them for a lot of things. So, to me, the idea that all humans are 'overwhelmingly more capable' is blatantly false. >Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence. What was achieved here or in the link I sent is not just "solving a math equation". >Or else we'd be talking about how my calculator is intelligent. If you said that humans are overwhelmingly more capable than calculators in arithmetic, well, I'd tell you you were talking nonsense. >Of course computers can compute faster than we can, that's aside the point. I never said anything about speed. You are not making any significant point here lol >No, I don't mean how the intelligence evolved or was created. Well then what are you saying? Because the only brute-forced aspect of LLM intelligence is its creation. If you do not mean that, then just drop the point. >But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional. First of all, this makes no sense, sorry. Evolution is regularly described as a brute-force process by atheist and religious scientists alike. Second, I don't have any problem with people thinking we have a creator, although that stance still doesn't necessarily mean a magic 'poof into existence' reality either. >But my point is that LLM's essentially arrive at answers by brute force through search. Sorry, but that's just not remotely true. This is so untrue I honestly don't know what to tell you. This very post, with the transcript available, is an example of how untrue it is. 
>or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on %20 of a lightbulb!). Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run it in real time. | | |
| ▲ | gf000 3 hours ago | parent | next [-] | | > Really ? Every Human ? Yes, in many ways, absolutely. Just because a model is a better "Google" than my dummy friend doesn't mean that this same friend isn't more capable in countless cases. > Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer ? The most powerful super computer on the planet could not run this in real time. Isn't that just more proof of how efficient the human brain is? Especially since a wire has much better properties than water solutions in bags. | | |
| ▲ | famouswaffles 2 hours ago | parent [-] | | >Just because a model is a better "Google" than my dummy friend doesn't mean that this same friend is more capable at countless cases. People use LLMs for a lot of things. 'Better Google' is a tiny slice of that. >Isn't that just more proof how efficient the human brain is? Sure. So what? If a game runs poorly on one piece of hardware and excellently on another, does that mean the game was fundamentally different between the two devices? No, of course not. |
| |
| ▲ | slopinthebag 4 hours ago | parent | prev [-] | | I never said that humans are better than LLMs along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains in which LLMs are either incapable or inferior to us. Here might be some definitions of intelligence, for example: > The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment. > "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills". > Goal-directed adaptive behavior. > a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation But even a housefly possesses levels of intelligence regarding flight and spatial awareness that dominate any LLM. Would it be fair to say a fly is more intelligent than an LLM? It certainly is along a narrow set of axes. > Because the only brute-forced aspect of LLM intelligence is its creation. I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. Not quite an exhaustive search, but massively compressed experience + pattern matching. But regardless, even if both forms of intelligence arrived via some form of brute force, what is more important to me is the result of that - how the process of employing our intelligence looks. > This very post, with the transcript available is an example of how untrue it is. The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that, really. > Do you realize how much compute it would take to run a full simulation of the human brain on a computer ? The most powerful super computer on the planet could not run this in real time. You're so close to getting it lol | | |
| ▲ | famouswaffles 3 hours ago | parent [-] | | >I never said that humans are better than LLM's along every axis. Rather, a reasonable definition of intelligence would necessarily encompass domains that LLM's are either incapable of or inferior to us. So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains? That's not what overwhelming means. >I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. That is not really what “brute force” means. Pattern learning over a compressed representation of experience is not the same thing as exhaustive search. Calling any statistical method “brute force” just makes the term too vague to be useful. > what is more important to me is the result of that - how does the process of employing our intelligence look. But this is exactly where you are smuggling in assumptions. We do not actually understand the internal workings of either the human brain or frontier LLMs at the level needed to make confident claims like this. So a lot of what you are calling “the result” is really just your intuition about what intelligence is supposed to look like. And I do not think that distinction is as meaningful as you want it to be anyway. Flight is flight. Birds fly and planes fly. A plane is not a “simulacrum of flight” just because it achieves the same end by a different mechanism. >The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model - not even that really. You do not need access to every internal representation to see that the model did not arrive at the answer by brute-forcing all possibilities. The observed behavior is already enough to rule that out. > Do you realize how much compute it would take to run a full simulation of the human brain on a computer ? The most powerful super computer on the planet could not run this in real time. 
>You're so close to getting it lol. No, you don't understand what I'm saying. If we were to be more accurate to the brain in silicon, it would be even less efficient than LLMs, never mind humans. Does that mean how the brain works is wrong? No, it means we are dealing with two entirely different substrates, and directly comparing efficiencies like that to show one is superior is silly. | | |
| ▲ | slopinthebag 3 hours ago | parent [-] | | > So all humans are overwhelmingly more intelligent but cannot even manage to be as capable in a significant number of domains When the number of domains in which humans are more capable than LLMs vastly exceeds the number of domains in which LLMs are more capable than humans, yes. I also agree that we don't have a great understanding of either human or LLM intelligence, but we can at least observe major differences and conclude that there are, in fact, major differences. In the same way we can conclude that both birds and planes have major differences, and saying "there's nothing unique about birds, look at planes" is just a really weird thing to say. > If we were to be more accurate to the brain in silicon, it would be even less efficient than LLMs Do you think perhaps this massive difference points to there being a significant and foundational structural and functional difference between these types of intelligences? | | |
|
|
|
|
| |
| ▲ | blackcatsec 5 hours ago | parent | prev [-] | | > I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers. I think it comes from a position of arrogance/ego. I'll speak for the US here, since that's what I know the most; but the average 'techie' in general skews toward the higher end of the intelligence distribution. This is a very, very broad stroke, and that's intentional, to illustrate my point. Because of this, techie culture gains quite a bit of arrogance with regard to the masses. And this has been trained into tech culture since childhood. Whether it be adults praising us for being "so smart", or that we "figured out the VCR", or some other random tech problem that literally almost any human being can solve by simply reading the manual. What I've found, in the vast majority of technical problem-solving cases that average people have challenges with, is that if they just took a few minutes to read a manual they'd be able to solve a lot of it themselves. In short, I don't believe as a very strong techie that I'm "smarter than most", but rather that I've taken the time to dive into a subject area that most other humans do not feel the need nor desire to. There are objectively hard problems in tech to solve, but the people solving THOSE problems in the tech industry are few and far between. And so the tech industry as a whole has spent the last decade or two spinning in circles on increasingly complex systems to continue feeding its own ego about its own intelligence. We're now at a point that rather than solving the puzzle, most techies are creating incrementally complex puzzles to solve because they're bored of the puzzles that are in front of them. "Let me solve that puzzle by making a puzzle solver." "Okay, now let me make a puzzle solver creation tool to create puzzle solvers to solve the puzzle." and so forth and so forth. 
At the end of the day, you're still just solving a puzzle... But it's this arrogance that really bothers me in the tech bro culture world. And, more importantly, at least in some tech bro circles, they have realized that their target to gathering an exponential increase in wealth doesn't lie in creating new and novel ways to solve the same puzzles, but to try and tout AI as the greatest puzzle solver creation tool puzzle solver known to man (and let me grift off of it for a little bit). | | |
| ▲ | virgildotcodes 2 hours ago | parent | next [-] | | It's funny because the fundamental thing I'm speaking out against is the arrogance of human exceptionalism. This whole debate about what it means to be intelligent or human just seems like we're making the same mistakes we've made over and over. Earth as the center of the universe, sun as the center of the universe, man as the only animal with consciousness and intellect, the anthropomorphic nature of the majority of the deities in our religions and the anthropocentric purpose of the universe within those religions... I think this desire to believe that we are special, that the universe in some way does ultimately revolve around us, is seemingly a deep need in our psyche but any material analysis of our universe shows that it is extremely unlikely that we hold that position. | |
| ▲ | slopinthebag 4 hours ago | parent | prev [-] | | I largely agree with you, but I also see this same type of thinking appear in people who I know are not arrogant - at least in the techbroisk way. |
|
| |
| ▲ | gormen 5 hours ago | parent | prev [-] | | [dead] |
|
|