datsci_est_2015 · 2 hours ago
Not entirely a straw man. What is the purpose of storing and retrieving LLMs at a fixed state, if not to guarantee a specific level of performance? Wouldn't a strong model of intelligence be capable of, to extend your analogy, running without having its hippocampus lobotomized? Given the precariousness of managing LLM context windows, I don't think it's particularly unfair to assume that LLMs that learn without limit become very unstable. To steelman: even if it is possible, it may be prohibitively expensive. But somehow I doubt it's possible.
stavros · 2 hours ago
It is, indeed, prohibitively expensive, but it's not impossible: the proof is that you can fine-tune LLMs.
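To make that concrete, here's a minimal sketch of what fine-tuning looks like with the Hugging Face Transformers library. The model choice, toy data, and hyperparameters are all illustrative assumptions, not anything from this thread:

    # Minimal fine-tuning sketch: bake new "memories" into an LLM's weights.
    # Model name, data, and hyperparameters are illustrative placeholders.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)
    from datasets import Dataset

    model_name = "gpt2"  # small model assumed, so the sketch is cheap to run
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Toy facts standing in for new experience the model should absorb.
    texts = ["The user's cat is named Mio.",
             "The user prefers answers in metric units."]

    def tokenize(batch):
        out = tokenizer(batch["text"], truncation=True,
                        padding="max_length", max_length=64)
        # Causal LM objective: the model learns to predict its own input.
        # (A real setup would mask padding tokens in the labels with -100.)
        out["labels"] = out["input_ids"].copy()
        return out

    dataset = Dataset.from_dict({"text": texts}).map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ft-out", num_train_epochs=3,
                               per_device_train_batch_size=2),
        train_dataset=dataset,
    )
    trainer.train()  # gradient updates write the new facts into the weights

The expense is in the mechanism itself: every update is a gradient pass over the model's weights, which is why per-user continual learning costs far more than serving one frozen checkpoint to everyone.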