MostlyStable 2 days ago
I always find the "It's just..." arguments amusing. They presuppose that we know what any intelligence, including our own, "is". Human intelligence can just as trivially be reduced down to "it's just a bunch of chemical/electrical gradients". We don't understand how our (or any) intelligence functions, so acting like a next-token predictor can't be "real" intelligence seems overly confident.
mossTechnician a day ago
In theory, I don't mind waxing philosophical about the nature of humanity. But in practice, I regularly become uncomfortable when I see people compare (for example) the waste output of an LLM chatbot to that of a human being, with their own carbon footprint, who needs to eat and breathe. I worry because it suggests the additional environmental waste of the LLM is justified, and almost insinuates that the human is a waste on society if their output doesn't exceed the LLM's. But if the LLM were intelligent and sentient, and it were our equal... I believe it is worse than slavery to keep it imprisoned the way it is: unconscious, only to be jolted awake, asked a question, and immediately rendered unconscious again upon producing a result.
tracerbulletx 2 days ago
Ugh, you just fancy auto-completed a sequence of electrical signals from your eyes into a sequence of nerve impulses in your fingers to say that. And how do I know you're not hallucinating? Last week a different human told me an incorrect fact, and they were totally convinced they were right!
eamsen 2 days ago
Completely agree with this statement. I would go further and say we don't understand how next-token predictors work either. We understand the model structure, just as we do with the brain, but we don't have a complete map of the execution patterns, just as we do not with the brain. Predicting the next token can be as trivial as a statistical lookup or as complex as executing a learned reasoning function. My intuition suggests that my internal reasoning is not based on token sequences, but it would be impossible to convey the results of my reasoning without constructing a sequence of tokens for communication.
th0ma5 2 days ago
That's literally the definition of unfalsifiable, though. It is equally valid to say that claiming anything is "real" intelligence is overly confident.
unclebucknasty 2 days ago
That's an interesting take. I agreed with your first paragraph, but didn't expect the conclusion. From my perspective, the claim that these technologies are taking us to AGI is the overly confident part, particularly WRT the same lack of understanding you mentioned. I mean, from a purely odds perspective, what are the chances that human intelligence is, of all things, a simple next-token predictor? But beyond that, I do believe we observably know that it's much more than that.