MostlyStable 2 days ago

I always find the "It's just..." arguments amusing. They presuppose that we know what any intelligence, including our own, "is". Human intelligence can just as trivially be reduced to "it's just a bunch of chemical/electrical gradients".

We don't understand how our (or any) intelligence functions, so acting like a next-token predictor can't be "real" intelligence seems overly confident.

mossTechnician a day ago | parent | next [-]

In theory, I don't mind waxing philosophical about the nature of humanity. But in practice, I regularly become uncomfortable when I see people compare (for example) the waste output of an LLM chatbot to that of a human being, who has their own carbon footprint and needs to eat and breathe. I worry because it suggests the additional environmental waste of the LLM is justified, and almost insinuates that the human is a drain on society if their output doesn't exceed the LLM's.

But if the LLM were intelligent and sentient, and it were our equal... I believe it is worse than slavery to keep it imprisoned the way it is: unconscious, only to be jolted awake, asked a question, and immediately rendered unconscious again upon producing a result.

deadbabe a day ago | parent [-]

Worrying about whether an LLM is intelligent and sentient is not much different from worrying about the same thing for an AWS Lambda function.

tracerbulletx 2 days ago | parent | prev | next [-]

Ugh, you just fancy auto-completed a sequence of electrical signals from your eyes into a sequence of nerve impulses in your fingers to say that. And how do I know you're not hallucinating? Last week a different human told me an incorrect fact, and they were totally convinced they were right!

adamredwoods 2 days ago | parent | next [-]

Humans base their "facts" on consensus-driven education and knowledge. Anything that falls into the range of "I think this is true" or "I read this somewhere" or "I have a hunch" is more acceptable from a human than from an LLM. Humans are also more likely to hedge their uncertain answers with phrasing. LLMs can't do this; they don't have a way to track which of their answers are possibly incorrect.

deadbabe 2 days ago | parent | prev [-]

The human believes it was right.

The LLM doesn’t believe it was right or wrong. It doesn’t believe anything, any more than a mathematical function believes 2+2=4.

tracerbulletx 2 days ago | parent | next [-]

Obviously LLMs are missing many important properties of the brain, like spatial, temporal, and chemical factors, as well as the many interconnected feedback loops between different types of neural networks, all of which go well beyond what LLMs do.

Beyond that, they are the same thing. Signal Input -> Signal Output

I do not know what consciousness actually is so I will not speak to what it will take for a simulated intelligence to have one.

Also, I never used the word "believes"; I said "convinced". If it helps, I can say "acted in a way as if it had high confidence in its output".

cratermoon a day ago | parent [-]

Obviously sand is missing many important properties of integrated circuits, like semiconductivity, electric interconnectivity, transistors, and p-n junctions.

Beyond that, they are the same thing.

istjohn 2 days ago | parent | prev [-]

Can you support that assertion? What's your evidence?

cratermoon a day ago | parent [-]

not the OP but https://www.tandfonline.com/doi/abs/10.1080/0951508070123951...

eamsen 2 days ago | parent | prev | next [-]

Completely agree with this statement.

I would go further, and say we don't understand how next-token predictors work either. We understand the model structure, just as we do with the brain, but we don't have a complete map of the execution patterns, just as we do not with the brain.

Predicting the next token can be as trivial as a statistical lookup or as complex as executing a learned reasoning function.
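
To make the "trivial" end of that spectrum concrete, here is a minimal sketch (Python, purely illustrative; the toy corpus and names are my own, and this is not how any real LLM works) of next-token prediction as a bare statistical lookup over bigram counts:

    from collections import Counter, defaultdict

    # Toy corpus; raw bigram counts stand in for the "statistical lookup" case.
    corpus = "the cat sat on the mat the cat ran".split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(token):
        # Most frequent follower of `token`, or None if unseen.
        followers = bigrams.get(token)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("the"))  # -> 'cat'

A learned reasoning function sits at the other end of that spectrum: the mapping from context to next-token distribution is trained rather than tabulated, which is why the same interface can hide very different machinery.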

My intuition suggests that my internal reasoning is not based on token sequences, but it would be impossible to convey the results of my reasoning without constructing a sequence of tokens for communication.

th0ma5 2 days ago | parent | prev | next [-]

That's literally the definition of unfalsifiable, though. It is equally valid to say that claiming anything is "real" intelligence is overly confident.

unclebucknasty 2 days ago | parent | prev [-]

That's an interesting take. I agreed with your first paragraph, but didn't expect the conclusion.

From my perspective, the statement that these technologies are taking us to AGI is the overly confident part, particularly WRT the same lack of understanding you mentioned.

I mean, from a purely odds-based perspective, what are the chances that human intelligence is, of all things, a simple next-token predictor?

But, beyond that, I do believe that we observably know that it's much more than that.