FrustratedMonky 4 days ago

Right now LLMs are like students that study for years, then get their brains frozen into a textbook before they’re released. They can read new stuff during use (context window), but they don’t actually update their core weights on the fly. The “infinite context window” dream would mean every interaction is remembered and folded back into the brain, seamlessly blending inference (using the model) with training (reshaping it).

Within 2–3 years, we’ll see practical “personal LLMs” with effectively infinite memory via retrieval + lightweight updates, feeling continuous but not actually rewriting the core brain.
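The "retrieval + lightweight updates" idea can be sketched in a few lines: instead of touching the model's weights, every interaction is appended to an external store, and relevant memories are pulled back into the context at query time. This is a toy illustration, not any particular product's design — the bag-of-words "embedding" and the `RetrievalMemory` class are made up here to show the shape of the mechanism; a real system would use a learned dense embedding model and a vector database.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # A real system would use a learned dense embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class RetrievalMemory:
    """Append-only store of past interactions. Nothing in the
    'core model' ever changes -- which is exactly the point:
    the memory feels continuous without rewriting any weights."""
    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def remember(self, text: str) -> None:
        self.entries.append((text, embed(text)))

    def recall(self, query: str, k: int = 2) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: cosine(qv, e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

mem = RetrievalMemory()
mem.remember("user prefers metric units")
mem.remember("user is allergic to peanuts")
mem.remember("meeting notes from tuesday standup")
print(mem.recall("what units does the user prefer?", k=1))
# → ['user prefers metric units']
```

The retrieved snippets would then be prepended to the prompt, so the frozen model "remembers" without ever being retrained.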

Within 5–10 years, we’ll likely get true continual-learning systems that can safely update weights live, with mechanisms to prune bad habits and compress knowledge—closer to how a human learns daily.
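One way to picture "safely updating weights live" is to regularize each online update toward a snapshot of the old weights, so new data shifts the model without letting it stampede over what it already knows — loosely in the spirit of techniques like elastic weight consolidation. The 1-D model, learning rate, and penalty strength below are all invented for illustration; this is a cartoon of the idea, not a real training recipe.

```python
# Toy continual learner: a 1-D linear model y = w * x updated online.
# The L2 pull toward the frozen snapshot `w_anchor` is a crude stand-in
# for "don't overwrite old knowledge" machinery.

def sgd_step(w, x, y, w_anchor, lr=0.1, lam=0.5):
    pred = w * x
    grad = 2 * (pred - y) * x          # gradient of squared error
    grad += 2 * lam * (w - w_anchor)   # penalty for drifting from the snapshot
    return w - lr * grad

w = 2.0       # "pretrained" weight (the model once learned y = 2x)
anchor = w    # frozen snapshot of the pretrained weight
for _ in range(50):
    # New data insists y = 3x; the anchor keeps w from jumping all the way to 3.
    w = sgd_step(w, x=1.0, y=3.0, w_anchor=anchor)
print(round(w, 3))
# → 2.667  (a compromise between the old weight 2.0 and the new target 3.0)
```

Dropping the anchor term (`lam=0`) sends `w` straight to 3.0 — the 1-D version of catastrophic forgetting that any live-updating system would need to guard against.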

The rub is less can we and more should we: infinite memory plus unfiltered feedback loops risk building a paranoid mirror that learns every user’s quirks, errors, and biases as gospel. In other words, your personal live-updating LLM might become your eccentric twin.