coldtea 4 days ago:
> Roughly, actual intelligence needs to maintain a world model in its internal representation

And how's that not like stored information (memories) with weighted links between individual items and/or between groups of them?
sixo 4 days ago (parent):
It probably is a lot like that! I imagine it's a matter of specializing the networks and learning algorithms to converge to world-model-like structures rather than language-like ones. All these models do is approximate the underlying manifold structure of their training data; it's just that the manifold structure of a causal world is different from that of language.
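To make "approximate the underlying manifold structure" a bit more concrete, here's a toy numpy sketch (my own illustration, not something from either commenter): data that lives on a 1-D curve (a circle) embedded in 2-D. A linear 1-D model (PCA) can't capture the curved structure, while the right nonlinear coordinate, here the angle, standing in for what a trained encoder would have to discover, compresses it almost losslessly.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1000)      # hidden 1-D manifold coordinate
X = np.c_[np.cos(theta), np.sin(theta)]       # points on the unit circle in 2-D
X += 0.01 * rng.standard_normal(X.shape)      # small observation noise

# Linear "model": project onto the top principal component.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_lin = Xc @ Vt[0][:, None] @ Vt[0][None, :] + X.mean(axis=0)

# Nonlinear "model": use the true chart of the manifold (the angle),
# a stand-in for coordinates a learned encoder would converge to.
angle = np.arctan2(X[:, 1], X[:, 0])
X_mfd = np.c_[np.cos(angle), np.sin(angle)]

print("linear 1-D reconstruction error:  ", np.mean((X - X_lin) ** 2))
print("manifold 1-D reconstruction error:", np.mean((X - X_mfd) ** 2))
```

Running this, the linear reconstruction error is large (around 0.25, half the data's variance) while the manifold-aware one is down at the noise floor (around 1e-4). The analogous claim in the comment is that language and a causal world are different manifolds, so a network shaped to fit one won't automatically have the right coordinates for the other.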