gloosx 2 days ago
1. Computers cannot self-rewire the way neurons do, which means humans can pretty much adapt to any specific mental task (an "unknown", new task) without explicit retraining, whereas current computers need retraining to learn anything new.

2. Computers can't do continuous, unsupervised learning: they require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in their environment.
imtringued 2 days ago | parent
Minor nitpicks, I think your points are pretty good.

1. Self-rewiring is largely a matter of hardware design. Neuromorphic hardware is a thing.

2. LLM foundation models are actually trained in a self-supervised way: they take arbitrary text and learn to complete it, with no labels required. It's the instruction fine-tuning on Q/A pairs that is supervised.
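
To make that second point concrete, here's a minimal sketch of the two objectives (Python/PyTorch, with a toy character vocabulary, a deliberately tiny model, and made-up example strings; not any real LLM's training code). Pretraining computes a next-token loss on raw text with no labels at all, while instruction tuning computes the same loss on a prompt+answer pair but masks it so only the answer tokens are supervised:

    # Sketch only: toy vocab and toy model, illustrating the objectives.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab = sorted(set("abcdefghijklmnopqrstuvwxyz ?"))
    stoi = {ch: i for i, ch in enumerate(vocab)}

    def encode(s):
        return torch.tensor([stoi[c] for c in s])

    # A deliberately tiny "language model": embedding -> linear over the vocab.
    model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))

    def next_token_loss(text, loss_mask=None):
        ids = encode(text)
        inputs, targets = ids[:-1], ids[1:]           # predict token t+1 from token t
        logits = model(inputs)
        loss = F.cross_entropy(logits, targets, reduction="none")
        if loss_mask is None:
            return loss.mean()
        mask = loss_mask[1:]                          # align mask with the shifted targets
        return (loss * mask).sum() / mask.sum()       # average only over supervised tokens

    # 1) Pretraining: any raw text is its own training signal, no labels needed.
    pretrain_loss = next_token_loss("the cat sat on the mat")

    # 2) Instruction tuning: a curated Q/A pair, with the loss applied only to
    #    the answer tokens so the model is pushed toward the desired response.
    prompt, answer = "what sits on the mat? ", "the cat"
    mask = torch.cat([torch.zeros(len(prompt)), torch.ones(len(answer))])
    finetune_loss = next_token_loss(prompt + answer, loss_mask=mask)

    print(pretrain_loss.item(), finetune_loss.item())

Same loss function in both cases; the only difference is where the data comes from (arbitrary text vs. curated Q/A pairs) and which positions the gradient is allowed to flow from.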