datsci_est_2015 | 2 hours ago:

> Talking about how they find it hard to say they aren't sure of something is a much more interesting limitation to talk about, for example.

Sure, thank you for steelmanning my argument. I didn’t think I needed to spell out all of the fundamental limitations of LLMs in this specific thread. They are discussed at length across the web, but are often met with pushback, which was my entire point. Here’s another one: LLMs do not have a memory property. Shut off the power, turn it back on, and you lose all context. Any “memory” feature implemented by companies that sell LLM wrappers is a hack on top of how LLMs work, like seeding a context window before letting the user interact with the LLM.
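To make the context-seeding hack concrete, a minimal sketch (llm_complete and the list of saved memories are hypothetical illustrations, not any vendor's actual API):

    # Minimal sketch of a "memory" feature built by seeding the context
    # window. llm_complete() is a hypothetical stand-in for whatever
    # model call a wrapper makes; nothing here is a real vendor API.
    saved_memories = [
        "User prefers short answers.",
        "User is a Python developer.",
    ]

    def chat(user_message: str) -> str:
        # The model itself retains nothing between calls, so every
        # request re-sends the stored "memories" as context.
        preamble = "Known facts about the user:\n" + "\n".join(
            f"- {m}" for m in saved_memories
        )
        prompt = f"{preamble}\n\nUser: {user_message}\nAssistant:"
        return llm_complete(prompt)  # hypothetical model call

Power-cycle the process and saved_memories (persisted somewhere outside the model) is all that survives; the weights themselves never change.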
stavros | 2 hours ago:

But that's also like saying "humans don't have a memory property; any 'memory' is in the hippocampus". It's not useful to say that an LLM you don't bother to keep training has no memory. Of course it doesn't: you removed its ability to form new memories!
datsci_est_2015 | an hour ago:

So why, then, do we stop training LLMs and keep them stored at a specific state? Is it perhaps because the results otherwise become terrible, and LLMs have a delicate optimal state for general use? This sounds like an even worse case for a model of intelligence.
stavros | an hour ago:

Nope, it's not that, but it's nice of you to offer a straw man. Makes the argument flow better.
datsci_est_2015 | an hour ago:

Not entirely a straw man. What is the purpose of storing and retrieving LLMs at a fixed state, if not to guarantee a specific level of performance? Wouldn’t a strong model of intelligence be capable of, to extend your analogy, running without having its hippocampus lobotomized? Given the precariousness of managing LLM context windows, I don’t think it’s particularly unfair to assume that LLMs that learn without limit become very unstable. To steelman, if it’s possible at all, it may be prohibitively expensive. But somehow I doubt it’s possible.
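As an aside on that context-window juggling, a minimal sketch of the kind of truncation a wrapper has to do (count_tokens is a hypothetical tokenizer stand-in; the 4096-token budget is arbitrary):

    # Sketch of the context-window juggling mentioned above: once a
    # transcript exceeds the token budget, something must be dropped.
    # count_tokens() is a hypothetical tokenizer stand-in; 4096 is an
    # arbitrary budget.
    MAX_TOKENS = 4096

    def fit_to_window(turns: list[str]) -> list[str]:
        kept, total = [], 0
        # Walk backwards so the newest turns survive; older ones are
        # silently discarded, which is where the precariousness lies.
        for turn in reversed(turns):
            total += count_tokens(turn)  # hypothetical tokenizer call
            if total > MAX_TOKENS:
                break
            kept.append(turn)
        return list(reversed(kept))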
stavros | an hour ago:

It is, indeed, prohibitively expensive, but it's not impossible. The proof is in the fact that you can fine-tune LLMs: the same gradient-descent machinery used in pretraining can keep updating the weights after deployment.
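As a concrete illustration, a minimal sketch with Hugging Face transformers and PyTorch (gpt2 is used only because it is small; three gradient steps on one sentence is an illustration, not a recommended continual-learning setup):

    # Minimal sketch: a few gradient steps that write a new fact into a
    # model's weights after pretraining has ended.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    # One "new memory", expressed as ordinary training text.
    batch = tok("The user's favorite editor is Emacs.", return_tensors="pt")

    model.train()
    for _ in range(3):  # a few causal-LM steps on the new fact
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    model.save_pretrained("gpt2-updated")  # the update now lives in the weights

Whether doing this continuously, per user, stays stable is exactly the instability worry raised upthread; the sketch only shows that post-deployment weight updates are mechanically possible.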