hnuser123456 | a day ago
By the very act of acknowledging you made a mistake, you are in fact updating your neurons to influence your future decision-making. But that is flat-out impossible the way LLMs currently run. We would need some kind of constant self-updating of the weights themselves at inference time.
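To make that concrete, here's a toy sketch of what "updating the weights at inference time" could look like. The linear model, the feedback vector, and the learning rate are all made up for illustration; a real LLM obviously isn't a single linear layer, and no production LLM does this today, which is my point:

    # Toy sketch of online / test-time weight updates (hypothetical setup).
    import torch
    import torch.nn as nn

    model = nn.Linear(8, 8)                        # stand-in for an LLM
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    def respond_and_learn(prompt_vec, correction_vec=None):
        """Produce an output; if the user corrects us, take one gradient step."""
        output = model(prompt_vec)
        if correction_vec is not None:             # "you made a mistake"
            loss = nn.functional.mse_loss(output, correction_vec)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                       # weights change from the interaction
        return output.detach()

    x = torch.randn(8)
    first = respond_and_learn(x)                               # normal inference
    respond_and_learn(x, correction_vec=torch.zeros(8))        # user feedback updates weights
    second = respond_and_learn(x)                              # behaviour has now shifted

The key difference from today's deployments is that the correction actually changes the parameters, so the "acknowledgement" persists beyond the current conversation.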
semiquaver | a day ago | parent
Humans have short-term memory. LLMs have context windows. The context directly modifies a temporary mutable state, producing a high-dimensional conceptual representation that incorporates both the model's training data and the input context. Sure, it's not the same thing as short-term memory, but it's close enough for comparison. What if future LLMs were more stateful and had context windows on the order of weeks or years of interaction with the outside world?
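As a rough illustration (the chat_completion callable and the storage file here are hypothetical stand-ins, not any real API), "long-lived context as memory" could be as simple as replaying a persistent transcript on every call while the weights stay frozen:

    # Toy sketch: a persistent transcript acts as the model's mutable state.
    import json
    from pathlib import Path

    MEMORY_FILE = Path("lifetime_context.jsonl")   # hypothetical store of weeks/years of turns

    def load_context() -> list[dict]:
        if not MEMORY_FILE.exists():
            return []
        return [json.loads(line) for line in MEMORY_FILE.read_text().splitlines()]

    def remember(turn: dict) -> None:
        with MEMORY_FILE.open("a") as f:
            f.write(json.dumps(turn) + "\n")

    def chat(user_message: str, chat_completion) -> str:
        context = load_context()                   # the model's "short-term memory"
        context.append({"role": "user", "content": user_message})
        reply = chat_completion(context)           # frozen weights, mutable context
        remember({"role": "user", "content": user_message})
        remember({"role": "assistant", "content": reply})
        return reply

Nothing about the weights changes, but past mistakes and corrections stay visible to the model for as long as the transcript (and the context window) can hold them.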