woeirua 3 hours ago

>As a quick aside, I am not going to entertain the notion that LLMs are intelligent, for any value of “intelligent.” They are robots. Programs. Fancy robots and big complicated programs, to be sure — but computer programs, nonetheless. The rest of this essay will treat them as such. If you are already of the belief that the human mind can be reduced to token regurgitation, you can stop reading here. I’m not interested in philosophical thought experiments.

I can't imagine why someone would want to openly advertise that they're so closed-minded. Everything after this paragraph is just anti-LLM ranting.

Cloudef 2 hours ago | parent | next [-]

What's wrong with the statement? The black box algorithm might have been generated by machine learning, but it's still a computer program in the end.

woopsn 19 minutes ago | parent | next [-]

A provocative aside in bad faith, and in any case a completely minor point within the overall post, the rest of which some of the people he's telling to fuck off might otherwise have read.

km3r an hour ago | parent | prev [-]

Because it's so entirely reductive and misunderstands how far the technology has progressed. Hello world is a computer program. So is Microsoft Windows. New levels of "intelligence" unlock with greater complexity of a program.

Like look at our brains. We know decently well how a single neuron works. We can simulate a single one with "just a computer program". But clearly with enough layers some form of complexity can emerge, and at some level that complexity becomes intelligence.
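
To make that concrete, here's roughly what "just a computer program" for one neuron looks like. A toy rate-model sketch in Python with made-up weights, nothing like real biophysics:

    # Toy artificial neuron: a weighted sum of inputs pushed through a
    # nonlinearity. Purely illustrative; real neurons are far messier.
    import math

    def neuron(inputs, weights, bias):
        # Each input is scaled by its "synaptic" weight, then summed.
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Squash to a firing rate between 0 and 1.
        return 1.0 / (1.0 + math.exp(-activation))

    print(neuron([0.5, 0.1], [0.8, -0.3], 0.05))  # ~0.60

Wire millions of these together with learned weights, and the question becomes what, if anything, emerges at scale.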

andsoitis an hour ago | parent [-]

> with enough layers some form of complexity can emerge, and at some level that complexity becomes intelligence.

It isn’t a given that complexity begets intelligence.

DiogenesKynikos 19 minutes ago | parent | next [-]

But in the case of both biological and computer neurons, it is an empirical fact that complexity has led to intelligence.

PaulDavisThe1st an hour ago | parent | prev [-]

and it isn't a given that it doesn't, so maybe a little openness towards the possibility is warranted?

andsoitis an hour ago | parent [-]

I’m open, but the comment I responded to asserted: “complexity becomes intelligence”, as if it were a fact. And it isn’t proven.

DiogenesKynikos 18 minutes ago | parent [-]

We have LLMs, which are obviously intelligent. How is it not proven?

PaulDavisThe1st 16 minutes ago | parent [-]

There is no "obvious" about it, unless you define "intelligent" in a rather narrow (albeit Turing-esque) way.

The suspicion is that they are good at predicting the next token and not much else. This is still a research topic at this point, from my reading.
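
For anyone unfamiliar, "predicting the next token" boils down to a loop like the sketch below. This is a bare illustration, not any real library's API; "model" stands in for the entire trained network:

    # Sketch of autoregressive generation: the model only ever chooses
    # the next token, given everything generated so far.
    def generate(model, prompt_tokens, n_new):
        tokens = list(prompt_tokens)
        for _ in range(n_new):
            probs = model(tokens)  # one probability per token in the vocabulary
            # Greedy decoding: take the most likely token (real systems sample).
            tokens.append(max(range(len(probs)), key=probs.__getitem__))
        return tokens

    def toy(toks):
        # Toy stand-in "model" over a 10-token vocabulary:
        # always predicts the token after the last one.
        return [1.0 if i == (toks[-1] + 1) % 10 else 0.0 for i in range(10)]

    print(generate(toy, [0], 5))  # [0, 1, 2, 3, 4, 5]

The open question is whether doing that step well at scale requires, or produces, something like an internal model of the world.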

hodgehog11 3 hours ago | parent | prev | next [-]

I disagree that the majority of it is anti-LLM ranting; there are several subtle points here that are grounded in realism. You should read past the first bit if you're judging mainly from the (admittedly naive) first few paragraphs.

jbotz 35 minutes ago | parent | next [-]

> You should read on past the first bit...

Not GP, but... the author said explicitly "if you believe X you should stop reading". So I did.

The X here is "that the human mind can be reduced to token regurgitation". I don't believe that exactly, and I don't believe that LLMs are conscious, but I do believe that what the human mind does when it "generates text" (i.e. writes essays, programs, etc.) may not be all that different from what an LLM does. And that means that most of humans' creations are also "plagiarism" in the same sense the author uses here, which makes his argument meaningless. You can't escape the philosophical discussion he says he's not interested in if you want to talk about ethics.

Edit: I'd like to add that I believe that this also ties in to the heart of the philosophy of Open Source and Open Science... if we acknowledge that our creative output is 1% creative spark and 99% standing on the shoulders of Giants, then "openness" is a fundamental good, and "intellectual property" is at best a somewhat distasteful necessity that should be as limited as possible and at worst is outright theft, the real plagiarism.

woeirua 2 hours ago | parent | prev [-]

I read the rest of it. It was intellectually lazy.

measurablefunc an hour ago | parent [-]

It's more intellectually lazy to think boolean logic at a sufficient scale crosses some event horizon wherein its execution on mechanical gadgets called computers somehow adds up to intelligence beyond human understanding.

wolrah 3 hours ago | parent | prev | next [-]

> I can't imagine why someone would want to openly advertise that they're so closed-minded.

I would say the exact same about you. Rejecting an absolutely accurate and factual statement like that as closed-minded strikes me as no different from the people who insist that medical science is closed-minded about crystals and magnets.

I can't imagine why someone would want to openly advertise they think LLMs are actual intelligence, unless they were in a position to benefit financially from the LLM hype train of course.

PaulDavisThe1st an hour ago | parent | next [-]

I have no financial position w.r.t. LLMs in any way that I am aware of (it is possible that some of the mutual funds I put money into have investments in companies that work with LLMs, but I know of no specifics there).

I am not ready to say that "LLMs are actual intelligence", and most of their publicly visible uses seem to me to be somewhere between questionable and ridiculous.

Nevertheless, I retain a keen ... shall we call it anti-skepticism? ... that LLMs, by modelling language, may have accidentally modelled/created a much deeper understanding of the world than was ever anticipated.

I do not want LLMs to "succeed"; I think a society in which they are common is a worse society than the one in which we lived 5 years ago (as bad as that was). But my curiosity is not abated by such feelings.

woeirua 2 hours ago | parent | prev [-]

Cool, so clearly articulate the goalposts. What do LLMs have to do to convince you that they are intelligent? If the answer is that no amount of evidence can change your mind, then you're not arguing in good faith.

shinycode an hour ago | parent | next [-]

It’s maybe an ethical and identity problem for most people. The idea that something not grounded in biology has somewhat the same « quality of intelligence » as us is disturbing. It raises so many uncomfortable questions: should we accept being dominated and governed by a higher intelligence? Should we keep it a « slave », or give it its « deserved freedom »? Are those questions grounded in reality, or is intelligence simply decoupled from the realm of biology, so that we don’t have to consider them at all? Should only biological « beings » with emotions/qualia be considered relevant as regards intelligence, which would not matter on its own but only when it embodies qualia? It’s very new, and a total paradigm shift in how we think about life; it’s hard to ask people to be in good faith here.

PaulDavisThe1st an hour ago | parent [-]

But you don't and cannot know if qualia exist in a system, so how can that ever be a criterion for any kind of qualification?

shinycode 8 minutes ago | parent [-]

That’s the main problem, isn’t it? Because it does matter, and there are consequences to that. Should you « unplug » an AI from the grid? Should we erase an AI’s memories? We eat animals and forbid eating humans; why? Could we let an AI « eat » some of us, like in The Matrix?

Should we consider it our equal, or superior to us? Should we give it the reins of politics if it’s superior in decision-making? Or maybe the premise is « given all the knowledge that exists, coupled with a good algorithm, you look like / are / have intelligence »? In which case intelligence is worthless in a way; it’s just a characteristic, not a quality. Which would make AIs fantastic tools, and never our equals?

rmunn an hour ago | parent | prev [-]

Maybe, I don't know, not be based on a statistical model?

Come on. If you are actually entertaining the idea that LLMs can possibly be intelligent, you don't know how they work.

But to take your silly question seriously for a minute: maybe I might consider LLMs to be capable of intelligence if they were able to learn, if they were able to solve problems that they weren't explicitly trained for. For example, have an LLM read a bunch of books about the strategy of Go, then actually apply that knowledge to beat an experienced Go player who was deliberately playing unconventional, poor strategies like opening in the center.

Since pretty much nobody opens their Go game in the center (the corners are far superior), the LLM's training data is NOT going to have a lot of Go openings where one player plays mostly in the center. At which point you'd see that the LLM isn't actually intelligent, because an intelligent being would have understood the key concept in those books: that you should mostly play in the corners at first, in order to build territory with the smallest number of moves. But when faced with unconventional moves that aren't found anywhere on the Internet, the LLM would just crash and burn.

That would be a good test of intelligence. Learning by reading books, and then being able to apply that knowledge to new situations where you can't just regurgitate the training material.

PaulDavisThe1st an hour ago | parent [-]

Have you seen the now-years-old transcripts of "ancient" LLMs inventing new languages with grammar and syntax structures completely different from our own?

palmotea 2 hours ago | parent | prev | next [-]

> I can't imagine why someone would want to openly advertise that they're so closed-minded.

It's not being closed-minded. It's not wanting to get sea-lioned to death by obnoxious people.

PaulDavisThe1st 43 minutes ago | parent [-]

[WARNING: seriously off-topic comment, I was triggered]

Here's what sea-lioned means to me:

I say something.

You accuse me of sea-lioning.

I have two choices: attempt to refute the accusation, which itself becomes sea-lioning, or allow your accusation to stand unchallenged, which appears to most people as confirmation of some kind that I was sea-lioning.

It is a nuclear weapon launched at discussion. It isn't that it doesn't describe a phenomenon that actually happens in the world. However, it is an accusation to which there is never any way to respond that doesn't confirm it, whether it was true or not.

It is also absolutely rooted in what appears to me to be a generational distinction: it seems that a bunch of younger people consider it a right to speak "in public" (i.e. in any kind of online context where people who do not know you can read what you wrote) and expect to avoid a certain kind of response. Should that response arise, various things will be said about the responder, including accusations of "sea-lioning".

My experience is that people who were online in the 80s and 90s find this expectation somewhere between humorous and ridiculous, and that people who went online somewhere after about 2005 do not.

Technologically, it seems to reflect a desire among many younger people for "private-public spaces". In the absence of any such systems actually existing (at least from their POV), they believe they ought to be able to use very non-private public spaces (facebook, insta, and everything else under the rubric of "social media") as they wish to, rather than as the systems were designed. They are communicating with their friends, and the fact that their conversations are visible is not significant. Thus, when a random stranger responds to their not-private-public remarks ... sea-lioning.

We used to have more systems that were sort-of-private-public spaces - mailing lists being the most obvious. I sympathize with a generation that clearly wants more of these sorts of spaces to communicate with friends, but I am not sympathetic to their insistence that corporate creations that are not just very-much-non-private-public spaces but also essentially revenue generators should work the way they want them to.

Ygg2 3 hours ago | parent | prev | next [-]

> I can't imagine why someone would want to openly advertise that they're so closed-minded.

Because humans often anthropomorphize completely inert things? E.g. a coffee machine or a bomb disposal robot.

So far, whatever behavior LLMs have shown is basically fueled by sci-fi stories of how a robot should behave under such-and-such circumstances.

acjohnson55 3 hours ago | parent | prev [-]

It was actually much less anti-LLM than the beginning led me to expect.

But I agree that it is self-limiting not to bother considering the ways that LLM inference and human thinking might be similar (or not).

To me, they seem to do a pretty reasonable emulation of single-threaded thinking.

Zardoz84 an hour ago | parent [-]

They are not similar. An LLM is a complex statistical machine. A brain is a highly complex neural network. A brain is more similar to the perceptron branch predictor in some AMD CPUs than to an LLM.
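
For reference, the perceptron predictor I mean works along these lines. A simplified Python sketch of the Jiménez/Lin-style design; real hardware differs in the details:

    # Simplified perceptron branch predictor (the "neural" predictor idea
    # used in some AMD cores; actual implementations differ).
    HISTORY = 8
    weights = [0] * (HISTORY + 1)   # index 0 is the bias weight
    history = [1] * HISTORY         # recent branch outcomes as +1 / -1

    def predict():
        # Dot product of weights and history decides taken vs. not taken.
        y = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
        return (1 if y >= 0 else -1), y

    def train(outcome, y, threshold=14):
        # Learn only on a misprediction or a low-confidence (small |y|) output.
        if (1 if y >= 0 else -1) != outcome or abs(y) <= threshold:
            weights[0] += outcome
            for i in range(HISTORY):
                weights[i + 1] += outcome * history[i]
        history.pop(0)
        history.append(outcome)

    # Usage: predict, observe the real outcome (+1 taken, -1 not), train.
    prediction, y = predict()
    train(+1, y)

A single learned dot product with an online update; that's the comparison I'm drawing, one tiny adaptive unit versus a giant statistical text model.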

PaulDavisThe1st 33 minutes ago | parent [-]

I would recommend investigating how contemporary LLMs actually work.

Possibly start with something like: https://transformer-circuits.pub/2025/attribution-graphs/bio...