famouswaffles 20 hours ago
I get irritated when people act like they know what they're talking about when it's just nonsense they keep spitting out. I'm honestly sick of it. There's a fair amount of LLM interpretability research out there. If you're actually interested in knowing better, go read it; I'll even link what I find interesting. All this talk of lookup tables is nonsensical. You have no idea what you're talking about.

>It doesn't even "know" what the actual text continuation must be, strictly speaking. It just returns a list of probabilities that we must select. It can't select it itself. To go from "list of probabilities" to "chatbot" requires adding additional hardcoded code (no AI involved) that greatly influences how the chatbot behaves, feels. Imagine if an actual sentient being had a button: you press it, and suddenly Steven the sailor becomes a Chinese lady who discusses Confucius. Or starts saying random gibberish. There's no independent agency whatsoever. It's all a bunch of clever tricks.

You are not making any sense here. Producing a probability distribution over next tokens is the model's decision procedure. Sampling is just the readout rule for turning that distribution into a concrete sequence. Yes, decoding choices affect style, creativity, determinism, and failure modes. That is true. It does not follow that the model is therefore "just tricks" or that the intelligence-like behavior lives outside the network.

>In an actual brain, the structure of the connectome itself drives a lot of behavior. In an LLM, all connections are static and predefined. A brain is much more resistant to failure. In an LLM changing a single hypersensitive neuron can lead to a full model collapse. There are humans who live normal lives with a full hemisphere removed.

You are moving the goalposts. The fact is: randomly corrupting any system damages it. This is not a meaningful test of whether a system is "truly intelligent." Random lesions to human cortex are also catastrophic.
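To make the distribution-vs-readout distinction concrete, here's a minimal sketch. The logits and the four-token vocabulary are made up for illustration; the point is that the model's output (the distribution) is fixed, while greedy decoding and temperature sampling are just two different readout rules applied to the same distribution:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize.
    # Subtracting the max is the standard numerical-stability trick.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 4-token vocabulary -- illustrative numbers only.
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(logits)  # this is "the model's decision": a full distribution

# Readout rule 1 -- greedy decoding: deterministic, always the argmax token.
greedy_token = max(range(len(probs)), key=probs.__getitem__)

# Readout rule 2 -- temperature sampling: stochastic draw from the same
# distribution; higher temperature flattens it, lower sharpens it.
random.seed(0)
sampled_token = random.choices(range(len(probs)), weights=probs, k=1)[0]
```

Swapping the readout rule changes style and determinism, but the ranking of continuations lives entirely in the distribution the network produced.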
The hemispherectomy cases you mention involve surgical removal of diseased tissue with significant neural reorganization over time, not random weight corruption. That's not even a fair comparison. LLMs are also deeply redundant. If they weren't, techniques like quantization or layer pruning wouldn't work.
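The quantization point can be sketched in a few lines. This is a toy example with made-up weights, not any real model's parameters: symmetric 8-bit quantization perturbs every single weight, yet the layer's output barely moves, which is only possible because the computation doesn't hinge on exact values:

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return [v * scale for v in q]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Toy "layer": one neuron's weights and an input vector (illustrative values).
weights = [0.8, -1.2, 0.05, 0.33, -0.7]
x = [1.0, 0.5, -2.0, 0.1, 0.9]

q, scale = quantize_int8(weights)
approx_weights = dequantize(q, scale)

original_out = dot(weights, x)
quantized_out = dot(approx_weights, x)
error = abs(original_out - quantized_out)
```

Every weight was rounded to one of 255 levels, yet `error` stays tiny relative to the output. Real quantization schemes (per-channel scales, activation quantization) are more involved, but they rely on the same tolerance to small, distributed perturbations.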