rcarr 7 hours ago
Not an expert, but surely it's only a matter of time until there's a way to update with the latest information without having to retrain on the entire corpus?
computably 2 hours ago | parent
On a technical level, sure, you could say it's a matter of time, but that could mean tomorrow or 20 years from now. And even then, it still doesn't solve the intrinsic problem of encoding truth. An LLM just models its training data, so new findings will be buried by virtue of being underrepresented. If you somehow brute-force the data or training, maybe you can get it to sound like it's incorporating new facts, but in actuality it'll be broken and inconsistent.
Filligree 4 hours ago | parent
It's an extremely difficult problem, and if you knew how to do it you could be a billionaire. It's not impossible, obviously (humans do it), but it's not yet certain that it's possible with an LLM-sized architecture.