iamleppert 5 hours ago
I keep hearing people say "but as humans we actually understand". What evidence do you have of a material difference between the understanding an LLM has and the version a human has? What processes do we fundamentally perform that an LLM does not, or cannot? And what is the definition of "understanding" here that humans presumably satisfy and an LLM currently does not?
kneel25 2 hours ago | parent | next
Well, one material difference is that we don't input/output in tokens, I guess. We have a sense of the gaps and limits in our own knowledge, and factors like ego, self-preservation, and ambition feed into our thoughts, whereas an LLM just has raw data. Understanding the implication of a code change means having an idea of a desired structure, some notion of where you want to end up and how the pieces mesh together. An LLM has none of that. Just because it can copy the output produced by those factors doesn't mean it operates the same way.
mcpar-land 4 hours ago | parent | prev
https://ml-site.cdn-apple.com/papers/the-illusion-of-thinkin...