mft_ | 7 days ago
Sure, your points about the body aren't wrong, but (as you say) LLMs currently model only a small subset of a brain's functions: applied knowledge, language/communication, and more recently interpretation of visual data. There's no need or opportunity for an LLM (as they currently exist) to do anything beyond that. And just because additional inputs exist in the human body (the gut-brain axis, for example), it doesn't follow that they are especially (or at all) relevant to knowledge/language work.
TheOtherHobbes | 7 days ago | parent
The point is that knowledge/language work can't be reliable unless it's grounded in something outside of itself. Without that grounding you don't get an oracle; you get a superficially convincing but fundamentally unreliable idiot savant with no stable sense of self, other, or the real world. The foundation of science and engineering is reliability. If you start saying reliability doesn't matter, you're not doing science and engineering any more.