mschuster91 13 hours ago:
> This kind of context management is not that hard, even when building LLMs.

It is, at least if you wish to be in the meatspace; that's my point. Every day has 86400 seconds during which a human brain constantly adapts to and learns from external input, either directly while awake or indirectly during nighttime cleanup processes.

On top of that, humans have built-in filters for training. Say we see some drunkard shouting about the Hollow Earth on the sidewalk: our brain knows that this is a drunkard and that Hollow Earth is absolute crackpot material. So if it stores anything at all, it's the fact that there is a drunkard on that street and one might take another route next time; the drunkard's rambling itself is forgotten maybe five minutes later. AI, in contrast, needs to be hand-held by humans during training, who annotate, "grade" or weigh information during the compilation of the training dataset, so that the AI knows what is written in "Mein Kampf" and can answer questions about it, but also knows (or at least won't openly regurgitate) that the solution to economic problems isn't to just deport Jews.

And huge context windows aren't the answer either. My wife tells me she would like a fruit cake for her next birthday. I'll probably remember that piece of information (or at the very least I'll write it down)... but an AI butler? I'd be really surprised if that were still in its context space a year from now, and even if it were, I would not be surprised if it couldn't recall that fact.

And the final thing is prompts: also not the answer. We saw it just a few days ago with Grok - someone messed with the system prompt so it randomly interjected "white genocide" claims into completely unrelated conversations [1], despite the model hopefully being trained on a more civilised dataset. And to the contrary, we've also seen Grok reply to Twitter questions in a way that suggests it is aware its training data is biased.

[1] https://www.reuters.com/business/musks-xai-updates-grok-chat...
sigmoid10 13 hours ago:
> Every day has 86400 seconds during which a human brain constantly adapts to and learns from external input

That's not even remotely true, at least not in the sense that it is for context in transformer models. Can you tell me all the visual and auditory inputs you experienced yesterday at the 45232nd second? You only learn permanently and effectively from particular stimulation coupled with surprise, and that has a sample rate which is orders of magnitude lower. It's exactly the kind of sampling that can be replicated with a run-of-the-mill persistent memory system for an LLM.

I would wager that you could fit most people's core experiences and memories, the ones they can randomly access at any moment, into a 1000-page book - something that fits well into state-of-the-art context windows. For deeper, more detailed things you can always fall back to another system.
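To make that concrete, here's a toy sketch of such a memory system in plain Python. No real LLM API is involved, and all the names (MemoryStore, the word-based surprise score) are made up for illustration: only inputs that clear a novelty threshold get written to disk, and a small relevant slice is recalled into the prompt on demand.

    import json, time
    from pathlib import Path

    class MemoryStore:
        """Toy surprise-gated persistent memory (hypothetical names throughout)."""

        def __init__(self, path="memories.jsonl", threshold=0.5):
            self.path = Path(path)
            self.threshold = threshold  # how novel an input must be to be stored

        def _entries(self):
            if not self.path.exists():
                return []
            return [json.loads(l) for l in self.path.read_text().splitlines()]

        def surprise(self, text):
            # Toy novelty score: fraction of words never seen before.
            # A real system might use the LLM's own perplexity instead.
            seen = set()
            for e in self._entries():
                seen |= set(e["text"].lower().split())
            words = set(text.lower().split())
            return len(words - seen) / len(words) if words else 0.0

        def observe(self, text):
            # Store only what clears the surprise threshold; the
            # drunkard's rambling never makes it to disk.
            if self.surprise(text) < self.threshold:
                return False
            with self.path.open("a") as f:
                f.write(json.dumps({"t": time.time(), "text": text}) + "\n")
            return True

        def recall(self, query, k=3):
            # Crude relevance: word overlap with the query. Prepend the
            # top-k hits to the prompt instead of keeping everything in
            # the context window for a year.
            q = set(query.lower().split())
            ranked = sorted(self._entries(),
                            key=lambda e: len(q & set(e["text"].lower().split())),
                            reverse=True)
            return [e["text"] for e in ranked[:k]]

    mem = MemoryStore()
    mem.observe("My wife wants a fruit cake for her next birthday.")
    print(mem.recall("what cake should I bake for my wife's birthday?"))

The point being: the butler doesn't need the whole year in its window, just a few salient lines retrieved at the right moment.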