kenjackson 3 hours ago
The fallacy here is the assumption that humans know why we do what we do. Much like modern LLMs, we can produce an explanation, but it's just something we cook up in our brains; whether or not it's the truth is far more complex. Oddly, despite LLMs being huge networks with billions of parameters, we still probably understand them better than we do our own brains.
Barrin92 3 hours ago | parent
> The fallacy here is the assumption that humans know why we do what we do. Much like modern LLMs we have an explanation

Human brains and cognition do not work like LLMs, but that aside, it's irrelevant. Existing machines can explain what they did; that's why we built them. As Dijkstra points out in his essay on the foolishness of natural language programming (https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...), the entire point of programming is:

"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."

So to 'program' in English, when you have a comparatively error-free and unambiguous way to express yourself, is, in his words, like avoiding math 'for the sake of clarity'.
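A minimal sketch of Dijkstra's point, with an instruction I made up for illustration: the English request "remove duplicates from the list" admits several incompatible readings, while each formal program below commits to exactly one.

```python
# Two unambiguous formalizations of the ambiguous English
# instruction "remove duplicates from the list".

def dedup_keep_order(items):
    """Keep the first occurrence of each element, preserving input order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

def dedup_sorted(items):
    """Return the unique elements in sorted order, discarding input order."""
    return sorted(set(items))

data = [3, 1, 3, 2, 1]
print(dedup_keep_order(data))  # [3, 1, 2]
print(dedup_sorted(data))      # [1, 2, 3]
```

Both programs satisfy the natural-language spec, yet they disagree on the output; the formal texts rule out that ambiguity by construction.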