|
| ▲ | nrhrjrjrjtntbt an hour ago | parent | next [-] |
| > superior intelligence
You are talking about the future. But if we are talking about the future, the bitter lesson applies even more so. A superintelligence doesn't need a special programming language to be more productive. It can use Python for everything and write bug-free, correct code fast. |
|
| ▲ | tyushk 3 hours ago | parent | prev | next [-] |
| I don't think your ultimatum holds. Even assuming LLMs are capable of learning beyond their training data, that just leads back to the purpose of practice in education. Even if you provided a full, unambiguous language spec to a model, and the model were capable of intelligently understanding it, should you expect its performance with your new language to match the petabytes of Python "practice" the model comes with? |
| |
| ▲ | lovidico 2 hours ago | parent [-] |
| Further to this, you can trivially observe two more LLM weaknesses:
1. LLMs are bad at unusual syntax even when given a complete description, e.g. writing Standard ML and similar languages, or any esolangs.
2. Even with lots of training data, LLMs cannot generalise their output into a shape that doesn't resemble their training data, e.g. ask an LLM to write any nontrivial assembler code such as an OS bootstrap.
LLMs aren't a "superior intelligence" because every abstract concept they "learn" is acquired emergently. They understand programming concepts within the scope of languages and tasks that map easily back onto those concepts, and due to finite quantisation they can't generalise those concepts from first principles. I.e. a model can map Python to programming concepts, but it can't map programming concepts onto an esoteric language with any reliability. Try doing some prompting and this becomes agonisingly apparent! |
|
|
| ▲ | legostormtroopr 2 hours ago | parent | prev [-] |
| > If you are correct, that implies to me that LLMs are not intelligent and just are exceptionally well tuned to echo back their training data.
Yes. This is exactly how LLMs work. For a given input, an LLM outputs a non-deterministic response that approximates its training data. LLMs aren't intelligent. And it isn't just that they don't learn: they literally cannot learn from their experience in real time.
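As a minimal sketch of what "non-deterministic but bounded by training" means (toy numbers, not taken from any real model): inference just samples from a next-token distribution that was frozen when training finished, so each run differs, but nothing ever moves outside what the weights already encode.

    import random

    # Hypothetical next-token distribution after some prompt, fixed at
    # training time. Inference never updates it.
    next_token_probs = {"a": 0.55, "b": 0.20, "sum": 0.15, "0": 0.10}

    def sample_next_token():
        tokens, probs = zip(*next_token_probs.items())
        # Non-deterministic draw from the same frozen distribution every call.
        return random.choices(tokens, weights=probs, k=1)[0]

    print([sample_next_token() for _ in range(5)])  # e.g. ['a', 'sum', 'a', 'a', 'b']

There is no weight update and no feedback from the outputs into the distribution, which is the "cannot learn in real time" part. |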
| |
| ▲ | nrhrjrjrjtntbt an hour ago | parent [-] |
| There is some intelligence. It can figure stuff out and solve problems. It isn't copy-paste. But I agree with your point: they are not intelligent enough to learn during inference, which is the main point here. |
|