legostormtroopr 2 hours ago
> If you are correct, that implies to me that LLMs are not intelligent and just are exceptionally well tuned to echo back their training data.

Yes. This is exactly how LLMs work. For a given input, an LLM produces a non-deterministic response that approximates its training data. LLMs aren't intelligent. And it isn't that they don't learn; they literally cannot learn from their experience in real time.
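The "cannot learn in real time" claim is about frozen weights: inference reads the parameters but never writes them. A minimal sketch with a hypothetical toy model (not a real LLM or any real library API) of what that means:

```python
import random

class ToyModel:
    """Hypothetical stand-in for a trained model with fixed weights."""

    def __init__(self):
        # Weights are set once ("training") and never touched again.
        self.weights = [random.random() for _ in range(4)]

    def generate(self, tokens):
        # Sampling noise makes the output non-deterministic,
        # but generation only *reads* the weights.
        return [t * w + random.gauss(0, 0.01)
                for t, w in zip(tokens, self.weights)]

model = ToyModel()
before = list(model.weights)

for _ in range(100):                 # many "inference" calls
    model.generate([1.0, 2.0, 3.0, 4.0])

assert model.weights == before       # weights unchanged by inference
```

Anything the model appears to "learn" mid-conversation lives only in the prompt context; once that context is gone, so is the adaptation, because nothing was written back to the weights.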
nrhrjrjrjtntbt an hour ago
There is some intelligence. It can figure stuff out and solve problems; it isn't copy-paste. But I agree with your point: they are not intelligent enough to learn during inference, which is the main point here.