voxleone 18 hours ago
>> An LLM can't even learn mid-conversation. There's an implicit assumption that scaling text models alone gets us to human-like intelligence, but that seems unlikely without grounding in multiple sensory domains and a unified world model. What's interesting is that if we do go down that route successfully, we may get systems with something like internal experience or agency. At that point, the ethical frame changes quite a bit.