mschuster91 13 hours ago

> This kind of context management is not that hard, even when building LLMs.

It is, at least if you want it to operate in meatspace - that's my point. Every day has 86400 seconds during which a human brain constantly adapts to and learns from external input, either directly while awake or indirectly during nighttime cleanup processes.

On top of that, humans have built-in filters for training. Say we see some drunkard shouting about the Hollow Earth on the sidewalk: our brain knows that this is a drunkard and that Hollow Earth is absolute crackpot material, so if it stores anything at all, it's the fact that there is a drunkard on that street and that one might take another route next time. The drunkard's rambling itself is forgotten maybe five minutes later.

AI, in contrast, needs to be hand-held by humans during training who annotate, "grade" or weight information while the training dataset is compiled, so that the AI knows what is written in "Mein Kampf" and can answer questions about it, but also knows (or at least won't openly regurgitate) that the solution to economic problems isn't to just deport Jews.

And huge context windows aren't the answer either. My wife tells me she would like a fruit cake for her next birthday. I'll probably remember that piece of information (or at the very least I'll write it down)... but an AI butler? I'd be really surprised if that fact were still in its context window a year from now, and even if it were, I wouldn't be surprised if the model failed to recall it.

And the final thing is prompts... also not the answer. We saw it just a few days ago with Grok: someone messed with the system prompt so that it randomly interjected "white genocide" claims into completely unrelated conversations [1], despite hopefully being trained on a ... more civilised dataset. Conversely, we've also seen Grok reply to Twitter questions in a way that suggests it is aware its training data is biased.

[1] https://www.reuters.com/business/musks-xai-updates-grok-chat...

sigmoid10 13 hours ago

>Every day has 86400 seconds during which a human brain constantly adapts to and learns from external input

That's not even remotely true, at least not in the sense that it is for context in transformer models. Or can you tell me all the visual and auditory inputs you experienced yesterday at the 45232nd second? You only learn permanently and effectively from particular stimulation coupled with surprise. That has a sample rate orders of magnitude lower, and it's exactly the kind of sampling that can be replicated with a run-of-the-mill persistent memory system for an LLM. I would wager that you could fit most people's core experiences and memories - the ones they can randomly access at any moment - into a 1000-page book, something that fits well into state-of-the-art context windows. For deeper, more detailed things you can always fall back to another system.
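
To make that concrete, here's a minimal sketch in Python of what such a surprise-gated memory could look like. Everything in it - the toy embed(), the similarity threshold, the core/archive split - is an illustrative assumption, not any particular system's implementation:

    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy bag-of-words "embedding"; a real system would use an
        # actual embedding model here.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    class SurpriseGatedMemory:
        # Keep only "surprising" events in a small core memory, compact
        # enough to live in the prompt; everything else goes to a bulk
        # archive that the fallback system can search.
        def __init__(self, threshold: float = 0.5, core_limit: int = 100):
            self.threshold = threshold    # below this similarity = surprising
            self.core_limit = core_limit  # cap so the core fits in context
            self.core = []                # injected into every prompt
            self.archive = []             # the "other system" for details

        def observe(self, event: str) -> None:
            e = embed(event)
            best = max((cosine(e, embed(m)) for m in self.core), default=0.0)
            if best < self.threshold and len(self.core) < self.core_limit:
                self.core.append(event)     # novel enough to keep directly
            else:
                self.archive.append(event)  # routine; recoverable by search

        def recall(self, query: str, k: int = 3) -> list:
            q = embed(query)
            ranked = sorted(self.archive,
                            key=lambda m: cosine(q, embed(m)), reverse=True)
            return ranked[:k]

    mem = SurpriseGatedMemory()
    mem.observe("wife wants a fruit cake for her next birthday")
    mem.observe("wife wants a fruit cake for her next birthday")  # now routine
    print(mem.core)                     # first mention was surprising: kept
    print(mem.recall("birthday cake"))  # the rest stays findable via fallback

The similarity threshold is standing in for "surprise" here; a real system might gate on the model's own prediction error instead.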

bluesroo 9 hours ago

Your definition of "learning" is incomplete because you're applying LLM concepts to how human brains work. An LLM only "learns" during training. From that point forward, all it has is its context and vector DBs. If an LLM and its vector DB are not actively interacted with, nothing happens to them. For the brain, however, experiencing IS learning. And the brain NEVER stops experiencing.

Just because I don't remember my experiences at second 45232 on May 22 doesn't mean my brain wasn't actively adapting to them at that moment. The brain does a lot more learning than just what is conscious. And then when I went to sleep, my brain continued pruning and organizing the day's unconscious learning.

It will be interesting to see whether anyone can go from token prediction to free-form physical usefulness. I'm of the belief that LLMs are too verbose and energy-intensive to go from language-regurgitation machines to machines that move in the real world according to free-form prompting. It may be achievable given the vast amount of hype investment, but I think the energy requirements and latency will make an LLM-based approach economically infeasible.

ewoodrich 7 hours ago

> You only learn permanently and effectively from particular stimulation coupled with surprise.

This is just not true. A single two-minute conversation with emotional or intellectual resonance can significantly alter a human's thought process for years. There are topics where, every time they come up directly or by analogy, I recall something a teacher told me in high school that "stuck" with me for whatever reason. And it isn't even a "core" experience, just something that instantly clicked for my brain and altered my problem solving. At the time, no heuristic could have predicted how or why that particular interaction would have that kind of staying power.

Not to mention experiences that subtly alter thinking or behavior just by providing some baseline familiarity in place of blank-slate problem solving or routine. Like how, over time, you subtly adjust how you interact with coworkers based on the culture of your current company versus your last one, without any "flash" of insight required.