norir 4 hours ago
I suspect this is wrong. If you are correct, that implies to me that LLMs are not intelligent and are just exceptionally well tuned to echo back their training data. It makes no sense to me that a superior intelligence would be unable to trivially learn a new language's syntax and apply its semantic knowledge to it. So I believe that either LLMs will improve to the point that they easily pick up new languages, or we will realize that LLMs themselves are the dead end.
nrhrjrjrjtntbt an hour ago
> superior intelligence

You are talking about the future, but if we are talking about the future, the bitter lesson applies even more strongly. A superintelligence doesn't need a special programming language to be more productive. It can use Python for everything and write correct, bug-free code fast.
tyushk 3 hours ago
I don't think your dichotomy holds. Even assuming LLMs are capable of learning beyond their training data, that just leads back to the purpose of practice in education. Even if you provided a full, unambiguous language spec to a model, and the model were capable of intelligently understanding it, should you expect its performance with your new language to match the petabytes of Python "practice" a model comes with?
legostormtroopr 2 hours ago
> If you are correct, that implies to me that LLMs are not intelligent and are just exceptionally well tuned to echo back their training data.

Yes, this is exactly how LLMs work. For a given input, an LLM will output a non-deterministic response that approximates its training data. LLMs aren't intelligent. And it isn't that they don't learn; they literally cannot learn from their experience in real time.
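To make the non-determinism point concrete, here is a minimal Python sketch (the vocabulary and logits are made up for illustration, not taken from any real model): the next token is sampled from a temperature-scaled softmax over the model's scores, so the same input can yield different outputs on repeated runs.

    import math
    import random

    def softmax(logits):
        # Numerically stable softmax over raw scores.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical candidate next tokens and the scores a model might assign them.
    vocab = ["print", "echo", "puts", "console.log"]
    logits = [3.2, 1.1, 0.4, 2.7]

    def sample_next_token(temperature=0.8):
        # Lower temperature sharpens the distribution; higher temperature flattens it.
        probs = softmax([score / temperature for score in logits])
        return random.choices(vocab, weights=probs, k=1)[0]

    # Repeated calls with identical input can produce different tokens.
    print([sample_next_token() for _ in range(5)])

Greedy decoding (always taking the argmax) would make the output deterministic; it is the sampling step above that introduces the run-to-run variation.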