strbean 8 hours ago
> We already know X is categorically false because we know how LLMs are programmed, and not a single line of that programming pertains to thinking (thinking in the human sense, not "thinking" in the LLM sense, which is merely an anthropomorphized analogy for a script that feeds the model's output back in as further prompts before producing the final output presented to the user).

Could you elucidate the process of human thought for me, and point out the differences between it and a probabilistic prediction engine? I see this argument all over the place, but "how do humans think" is never described. It is always left as a black box with something magical (presumably a soul or some other metaphysical substance) inside.
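(For reference, the "script that feeds back multiple prompts" being described is roughly the loop below. This is only a rough sketch of the idea: generate stands in for a single sampled completion, and the prompt strings are made up, not any particular vendor's implementation.)

    # Rough sketch of a "thinking" loop: the model's own output is appended to
    # the prompt for a few rounds before the final answer is produced.
    def think_then_answer(generate, question, rounds=3):
        context = question
        for _ in range(rounds):
            # Each "thought" is just another sampled completion fed back in as prompt text.
            thought = generate(context + "\nThink step by step:")
            context += "\n" + thought
        # The last completion is the answer actually shown to the user.
        return generate(context + "\nFinal answer:")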
anonymous908213 8 hours ago
There is no need to involve souls or magic. I am not arguing that it is impossible to create a machine capable of doing the same computations as the brain. The argument is that, whether or not such a machine is possible, an LLM is not that machine.

If you'd like to think of our brains as squishy computers, then the principle is simple: we run code that is more complex than a token prediction engine. That our code is more complex than a token prediction engine's is easily verified by our ability to address problems that a token prediction engine cannot. This is because our brain-code can reason from deterministic logical principles rather than only from probabilities. We likely also have something akin to token prediction code, but that is not the only thing our brains are programmed to do, whereas it is the only thing LLMs are programmed to do.
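To make the contrast concrete, here is a toy sketch in Python (entirely illustrative; the probability table and rules are made up, and neither function reflects how any real system is implemented). The first function samples a next token from a probability table, so its output can vary; the second applies an inference rule whose conclusion cannot.

    import random

    def predict_next_token(context):
        # Toy "language model": choose the next token according to a probability table.
        table = {"the cat sat on the": [("mat", 0.7), ("hat", 0.2), ("dog", 0.1)]}
        tokens, weights = zip(*table[context])
        return random.choices(tokens, weights=weights)[0]

    def modus_ponens(facts, rules):
        # Deterministic logical step: if "A" is known and "A implies B" is a rule, conclude "B".
        derived = set(facts)
        for premise, conclusion in rules:
            if premise in derived:
                derived.add(conclusion)
        return derived

    print(predict_next_token("the cat sat on the"))   # usually "mat", but not always
    print(modus_ponens({"Socrates is a man"},
                       [("Socrates is a man", "Socrates is mortal")]))  # always the same conclusion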
viccis 7 hours ago
Kant's model of epistemology, in which humans schematize a conceptual understanding of objects through apperception of the manifold of impressions given by our sensibility, and then reason about those objects through transcendental application of the categories, is a reasonable enough model of thought. It was (and, I think, still is) a satisfactory answer to the question of how humans can produce synthetic a priori knowledge, something that LLMs are incapable of (don't take my word for it, though; ChatGPT is more than happy to discuss [1]).

1: https://chatgpt.com/share/6965653e-b514-8011-b233-79d8c25d33...