withinboredom 3 days ago

That's not exactly true. Every time you start a new conversation, you get a new LLM for all intents and purposes. Asking an LLM about an unrelated topic toward the end of a ~500-page conversation will get you vastly different results than at the beginning. If we could get to multi-thousand-page contexts, it would probably be less accurate than a human, tbh.

jbstack 3 days ago

Yes, I should have clarified that I was referring to memory of training data, not of conversations.

withinboredom a day ago

Recall of training data also deteriorates quite quickly as the context gets longer.