| |
| ▲ | Kim_Bruning 43 minutes ago | parent | next [-] | | Can we maybe make it "don't anthropoCENTRIZE the LLMs". The inverse of anthropomorphism isn't any more sane, you see. By analogy: just because a drone is not an airplane doesn't mean it can't fly. :-p LLMs absolutely have intent (their current task) and reasoning (what else is step-by-step doing?). Call it simulated intent and simulated reasoning if you must. Meanwhile they also have the property that if they have the ability to destroy all your data, they will absolutely find a way. Like kittens or puppies, they're ruthless trouble-finders, and you can't even blame the LLM, 'cause an inference-time LLM doesn't respond to punishment the same way a vertebrate does (because the most analogous loop to that was only available at training time). | |
| ▲ | coldtea an hour ago | parent | prev [-] | | That is not as strong an argument as it seems, because we too might very well be "a series of weights for probable next tokens". The main difference is the training part, and that it's always on. | | |
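To make "a series of weights for probable next tokens" concrete, here is a minimal sketch of next-token sampling; the toy vocabulary, logits, and numbers are invented for illustration and are not taken from any real model:

    import numpy as np

    # Toy next-token prediction: the model's weights produce a score (logit)
    # per token, softmax turns the scores into probabilities, and one token
    # is sampled. All values here are made up for the example.
    vocab = ["the", "cat", "sat", "on", "mat"]
    logits = np.array([0.2, 2.1, 0.5, 1.7, -0.3])    # hypothetical scores

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                              # softmax -> probabilities

    next_token = np.random.choice(vocab, p=probs)     # sample the probable next token
    print(dict(zip(vocab, probs.round(3))), "->", next_token)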
| ▲ | naikrovek 38 minutes ago | parent | next [-] | | We are much more than weights which output probable next tokens. You are a fool if you think otherwise. Are we conscious beings? Who knows, but we’re more than a neural network outputting tokens. Firstly, and most obviously, we aren’t LLMs, for Pete’s sake. There are parts of our brains which are understood (kinda) and there are parts which aren’t. Some parts are neural networks, yes. Are all? I don’t know, but the training humans get is coupled with the pain and embarrassment of mistakes, the ability to learn while training (since we never stop training, really), and our own desires to reach our own goals for our own reasons. I’m not spiritual in any way, and I view all living beings as biological machines, so don’t assume that I am coming from some “higher purpose” point of view. | | |
| ▲ | Kim_Bruning 12 minutes ago | parent [-] | | They're not artificial intelligence neural networks. They're biological neural networks. Brains are made of neurons (which Do The Thing... mysteriously, somehow. Papers are inconclusive!), glial cells (which support the neurons), and also several other tissues for (obvious?) things like blood vessels, which you need to power the whole thing, and other such management hardware. Bioneurons are a bit more powerful than what artificial intelligence folks call 'neurons' these days. They have built-in computation and learning capabilities. For some of them, you need hundreds of AI neurons to simulate their function even partially. And there are still bits people don't quite get about them. But weights and prediction? That's the next emergence level up; we're not talking about hardware there. That said, the biological mechanisms aren't fully elucidated, so I bet there are still some surprises there. |
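For contrast with the biological neuron described above, a minimal sketch of what AI folks call a 'neuron': a weighted sum of inputs, a bias, and a nonlinearity. The inputs and weights below are made up for illustration; a biological neuron does considerably more than this.

    import numpy as np

    # An "AI neuron" in its entirety: weighted sum of inputs, plus a bias,
    # passed through a nonlinearity. Values are invented for the example.
    def artificial_neuron(inputs, weights, bias):
        return np.tanh(np.dot(inputs, weights) + bias)

    x = np.array([0.5, -1.2, 3.0])    # incoming activations
    w = np.array([0.8, 0.1, -0.4])    # learned weights
    print(artificial_neuron(x, w, bias=0.2))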
| |
| ▲ | bigstrat2003 an hour ago | parent | prev | next [-] | | That is a silly point. We very clearly are not "a series of weights for probable next tokens", as we can reason based on prior data points. LLMs cannot. | |
| ▲ | nothinkjustai an hour ago | parent | prev [-] | | We very obviously are not just a series of weights for probable next tokens. Like seriously, you can even ask an LLM and it will tell you our brains work differently from it, and that’s not even including the possibility that we have a soul or any other spiritual substrate. | | |
| ▲ | fc417fc802 an hour ago | parent | next [-] | | Our brains work differently, yes. What evidence do you have that our brains are not functionally equivalent to a series of weights being used to predict the next token? I'm not claiming that to be the case, merely pointing out that you don't appear to have a reasonable claim to the contrary. > not even including the possibility that we have a soul or any other spiritual substrate. If we're going to veer off into mysticism, then the LLM discussion is also going to get a lot weirder. Perhaps we ought to stick to a materialist scientific approach? | | |
| ▲ | nothinkjustai 41 minutes ago | parent | next [-] | | You are setting the bar in a way that makes “functional equivalence” unfalsifiable. If by “functionally equivalent” you mean “can produce similar linguistic outputs in some domains,” then sure, we’re already there in some narrow cases. But that’s a very thin slice of what brains do, and thus not functionally equivalent at all. There are a few non-mystical, testable differences that matter:
- Online learning vs. frozen inference: brains update continuously from tiny amounts of data; LLMs do not.
- Grounding: human cognition is tied to perception, action, and feedback from the world. LLMs operate over symbol sequences divorced from direct experience.
- Memory: humans have persistent, multi-scale memory (episodic, procedural, etc.) that integrates over a lifetime. LLM “memory” is either weights (static) or context (ephemeral).
- Agency: brains are part of systems that generate their own goals and act on the world. LLMs optimize a fixed objective (next-token prediction) and don’t have endogenous drives. |
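A rough sketch of the first difference listed above (online learning vs. frozen inference), using a hypothetical one-parameter model; the update rule and numbers are assumptions made for illustration, not anything from a real LLM:

    # Frozen inference: the parameter never changes, no matter what it sees.
    def frozen_predict(w, x):
        return w * x

    # Online learning: every observation immediately nudges the parameter,
    # the way a brain keeps updating from small amounts of new data.
    def online_step(w, x, target, lr=0.1):
        error = target - w * x
        return w + lr * error * x    # one gradient step on squared error

    w_frozen, w_online = 0.5, 0.5
    for x, target in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:   # data says w should be 2.0
        w_online = online_step(w_online, x, target)

    print(w_frozen, w_online)   # frozen weight stays 0.5; online weight has moved toward 2.0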
| ▲ | an hour ago | parent | prev | next [-] | | [deleted] | |
| ▲ | CPLX an hour ago | parent | prev [-] | | What evidence do you have that a sausage is not functionally equivalent to a cucumber? | | |
| ▲ | fc417fc802 an hour ago | parent | next [-] | | I don't follow. If you provide criteria I can most likely provide evidence, unless your criterion is "vaguely cylindrical and vaguely squishy," in which case I obviously won't be able to. The person I replied to made a definite claim (that we are "very obviously not ...") for which no evidence has been presented, and which I posit humanity is currently unable to definitively answer in one direction or the other. |
| ▲ | trinsic2 6 minutes ago | parent | prev [-] | | LOL. It's pointless to argue with people like this. It reminds me of people who believe the earth is flat. Once you are convinced of something, awareness of the opposite be damned, there is no changing that position. |
|
| |
| ▲ | skeledrew an hour ago | parent | prev [-] | | It's really just a matter of degree. There are 1 million, 1 billion, 1 trillion parameter LLMs... and you keep scaling those parameters and you eventually get to humans. But it's still probable next tokens (decisions) based on previous tokens (experience). | | |
| ▲ | trinsic2 8 minutes ago | parent | next [-] | | LOL. Oook... No, I don't think so. The human experience and the mechanisms behind it have a lot of unknowns, and I'm pretty sure that trying to reduce the human experience to some number of parameters is short-sighted. |
| ▲ | simonh 27 minutes ago | parent | prev [-] | | They’re both neural networks, but the architectures built using those neural connections, and the way they are trained and operate, are completely different. There are many different artificial neural network architectures, and they’re not all LLMs. AlphaZero isn’t an LLM. There are feed-forward networks, recurrent networks, convolutional networks, transformer networks, generative adversarial networks. Brains have many different regions, each with a different architecture. None of them work like LLMs. Not even our language centres are structured or trained anything like LLMs. |
|
|
|
|