adamtaylor_13 4 hours ago
If LLMs just regurgitated compressed text, they'd fail on any novel problem not in their training data. Yet they routinely solve such problems, which means whatever's happening between input and output is more than retrieval, and calling it "not understanding" requires you to define understanding in a way that conveniently excludes everything except biological brains.
sfn42 3 hours ago | parent | next
Yes, there are some fascinating emergent properties at play, but when they fail it's blatantly obvious that there's no actual intelligence or understanding. They are very cool and very useful tools. I use them daily now, and the way I can just paste a vague screenshot with some vague text and get a useful response back blows my mind every time. But it's very clear that it's all smoke and mirrors: they're not intelligent, and you can't trust them with anything.
varispeed 4 hours ago | parent | prev
They don't solve novel problems. But if you hold such a strong belief, please give us examples.