datsci_est_2015 | 2 hours ago
> Talking about how they find it hard to say they aren't sure of something is a much more interesting limitation to talk about, for example.

Sure, thank you for steelmanning my argument. I didn't think I needed to actually spell out all of the fundamental limitations of LLMs in this specific thread. They are discussed at length across the web, but are often met with pushback, which was my entire point. Here's another one: LLMs have no memory property. Shut off the power and turn it back on, and you lose all context. Any "memory" feature implemented by companies that sell LLM wrappers is a hack on top of how LLMs work, like seeding a context window before letting the user interact with the LLM.
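To make the "seeding a context window" point concrete, here is a minimal, hypothetical sketch (function and variable names are invented for illustration): the wrapper stores notes outside the model and prepends them to every prompt, while the model itself retains nothing between calls.

```python
# Hypothetical sketch of a wrapper-level "memory" feature.
# The model is stateless; all "memory" lives in this list,
# which the wrapper injects into the prompt on every request.
saved_memories = [
    "User's name is Alex.",
    "User prefers concise answers.",
]

def build_prompt(user_message: str) -> str:
    # Prepend stored notes to the fresh prompt; nothing persists
    # inside the model between calls.
    memory_block = "\n".join(saved_memories)
    return f"[Memory]\n{memory_block}\n\n[User]\n{user_message}"

print(build_prompt("What's my name?"))
```

Restart the process with an empty `saved_memories` list and the "memory" is gone, which is the point being made above.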
stavros | 2 hours ago
But that's also like saying "humans don't have a memory property; any 'memory' is in the hippocampus." It's not useful to say that an LLM you don't bother to keep training has no memory. Of course it doesn't: you removed its ability to form new memories!