imtringued | 2 days ago
Minor nitpicks; I think your points are pretty good. 1. Self-rewiring is largely a matter of hardware design, and neuromorphic hardware is a thing. 2. LLM foundation models are actually unsupervised in a way (self-supervised, strictly speaking): they simply take arbitrary text and learn to complete it. It's the instruction fine-tuning that is supervised, on Q/A pairs. A rough sketch of the difference is below.
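To make that concrete, here is a minimal sketch, assuming PyTorch; the toy model and random token IDs are placeholders for a real LLM and real tokenized text. Pretraining needs nothing but the text itself shifted by one token, while instruction tuning scores the model against curated answer tokens:

    # Toy sketch: self-supervised pretraining vs. supervised instruction tuning.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab_size, dim = 100, 32
    model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))

    # 1. Pretraining: the "labels" are just the same text shifted by one token,
    #    so any raw text works -- no human annotation needed.
    raw_text = torch.randint(0, vocab_size, (1, 16))       # stand-in for tokenized web text
    logits = model(raw_text[:, :-1])                        # predict the next token at each position
    pretrain_loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                                    raw_text[:, 1:].reshape(-1))

    # 2. Instruction fine-tuning: the target comes from a curated Q/A pair,
    #    and the loss is only taken on the answer tokens (the supervised part).
    question = torch.randint(0, vocab_size, (1, 8))
    answer   = torch.randint(0, vocab_size, (1, 8))
    seq = torch.cat([question, answer], dim=1)
    logits = model(seq[:, :-1])
    labels = seq[:, 1:].clone()
    labels[:, :question.size(1) - 1] = -100                 # ignore the question tokens in the loss
    sft_loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                               labels.reshape(-1), ignore_index=-100)

The supervision in step 2 lives entirely in where the labels come from and which tokens get scored, not in the training loop itself.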
gloosx | 17 hours ago
Neuromorphic chips look cool, and they do simulate plasticity, but the circuits themselves are fixed: you can't sprout a new synaptic route or regrow a broken connection. Self-rewiring is not merely changing your internal state or connection weights; it means physically growing or pruning neurons, synapses, and pathways, with the system acting on its own substrate from within. That does not look realistic with current silicon design.

On the second point, about unsupervised learning: once an LLM is trained, its weights are frozen, so it won't update itself during a chat. Prompt-driven inference is immediate, not persistent. You can define a term or concept mid-chat and the model will behave as if it learned it, but only until the context window ends. If it were the other way, every model would drift very quickly.
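A minimal sketch of that frozen-weights point, assuming PyTorch and a toy model standing in for a trained LLM (the prompt tensor is just a placeholder for "a chat that defines a new term"):

    # Inference is only a forward pass: no gradients, no optimizer step.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Embedding(100, 32), nn.Linear(32, 100))
    model.eval()                                              # inference mode

    before = [p.clone() for p in model.parameters()]

    # The "new definition" only exists inside the prompt/context tokens.
    prompt_with_new_definition = torch.randint(0, 100, (1, 24))
    with torch.no_grad():                                     # nothing can update the weights here
        logits = model(prompt_with_new_definition)
        next_token = logits[0, -1].argmax()

    after = [p.clone() for p in model.parameters()]

    print("reply token:", next_token.item())
    # Parameters are bit-for-bit identical: nothing was learned persistently.
    print(all(torch.equal(a, b) for a, b in zip(before, after)))   # True

Persistent learning would need an explicit weight update (further fine-tuning) outside the chat loop; drop the context and the mid-chat "definition" is gone with it.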