efitz 3 days ago

Nothing that we consider intelligent works like LLMs.

Brains are continuous: they don't process one set of inputs and then halt until the next set arrives.

Brains continuously feed back on themselves. In essence, they never leave training mode, although physical changes like myelination optimize the brain for different stages of life.
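
To make the contrast concrete, here is a toy Python sketch (every name and number in it is invented for illustration, not any real model): the LLM-style function computes once and returns, while the brain-style loop runs continuously, carries persistent state, and feeds its own output back into that state.

    import random
    import time

    def llm_style(prompt):
        # Stateless request/response: compute once, return, then stop.
        return "reply(" + prompt + ")"

    def brain_style(ticks=50):
        # Persistent state keeps evolving even between stimuli, and the
        # loop's own output is fed back into its next step.
        state, last_output = 0.0, 0.0
        for _ in range(ticks):                  # stands in for "forever"
            stimulus = random.random() if random.random() < 0.1 else 0.0
            output = 0.8 * state + stimulus + 0.1 * last_output
            state = 0.5 * state + 0.5 * output  # feedback reshapes state
            last_output = output
            time.sleep(0.01)
        return state

    print(llm_style("hello"))   # one shot, then nothing happens
    print(brain_style())        # integrates continuously across time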

Brains have been trained by millions of generations of evolution, and we accelerate that training further during early life. LLMs are trained on much larger corpora of information and then expected to stay static for the rest of their operational life, modulo fine-tuning.
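
A minimal PyTorch sketch of that difference, using a tiny linear layer as a stand-in for a trained LLM: in deployment the weights are frozen no matter how many queries arrive, and learning only resumes if someone explicitly runs an update step.

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 1)            # stand-in for a trained LLM

    # Deployment today: weights never change, however many queries come in.
    model.eval()
    with torch.no_grad():
        _ = model(torch.randn(4))

    # Continual-learning alternative, closer to a brain: nudge the
    # weights a little after every interaction.
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    model.train()
    x, target = torch.randn(4), torch.tensor([1.0])
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    opt.step()                         # weights drift with experience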

Brains continuously manage context: most available input is heavily filtered by specialized networks dedicated to preprocessing before it reaches higher-level processing.
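
A toy sketch of that kind of gating, with a made-up salience score (the scoring rule and budget here are assumptions for illustration, not neuroscience): most raw events are discarded before they ever reach the expensive core.

    def salience(event):
        # Hypothetical score: intense and novel stimuli win.
        return event["intensity"] * (2.0 if event["novel"] else 1.0)

    def gate(events, budget=3):
        # Keep only the few most salient events; the rest never
        # consume the core's limited context.
        return sorted(events, key=salience, reverse=True)[:budget]

    raw = [{"id": i, "intensity": i % 7, "novel": i % 13 == 0}
           for i in range(1000)]
    context = gate(raw)
    print(len(raw), "events in ->", len(context), "reach the core")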

I think there is some merit to the idea that achieving AGI will partly involve a systems approach, but I suspect it will also require an architectural change to how models work.