nvch 5 days ago

We are constantly learning (updating the network) in addition to doing inference. Quite possibly our brains allocate more resources to learning than to inference.

Perhaps AI companies don’t know how to run continuous learning on their models:

* it’s unrealistic to do it on one big shared model, because it would instantly start drifting in an unknown direction

* they can’t make millions of clones of the model, run them separately, and set them free, the way it happens with humans

Mars008 4 days ago | parent [-]

> possibly our brains allocate more resources to learning than to inference

It's likely that in the brain, inference *is* learning. If you want a technical analog, it's like a conversation with an LLM: previous tokens affect the tokens being generated now. I.e., it's inference-time learning, which is well known and widely used.
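A toy sketch of that point (my own made-up example, not how any real LLM works): a fixed prediction rule whose weights never change, but which counts character bigrams in the prompt itself, so its output adapts to the context at inference time.

```python
from collections import Counter, defaultdict

def predict_next(context: str) -> str:
    """Predict the next character by counting bigrams in the context itself.

    The 'model' (this rule) is fixed -- nothing is stored between calls --
    yet its prediction adapts to whatever patterns appear in the prompt,
    a crude analog of inference-time / in-context learning.
    """
    bigrams = defaultdict(Counter)
    for a, b in zip(context, context[1:]):
        bigrams[a][b] += 1
    last = context[-1]
    if not bigrams[last]:
        return ""  # never seen a continuation for this character
    return bigrams[last].most_common(1)[0][0]

# The same fixed rule gives different answers for different contexts:
print(predict_next("abababa"))  # 'b' follows 'a' most often here -> 'b'
print(predict_next("acacaca"))  # 'c' follows 'a' most often here -> 'c'
```

No weights were updated between the two calls; only the context differed, which is the sense in which previous tokens "teach" the model during generation.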