| ▲ | strbean 11 hours ago |
| You realize parent said "This would be an interesting way to test proposition X" and you responded with "X is false because I say so", right? |
|
| ▲ | viccis 8 hours ago | parent | next [-] |
| Yes. That is correct. If I told you I planned on going outside this evening to test whether the sun sets in the east, the best response would be to let me know ahead of time that my hypothesis is wrong. |
| |
| ▲ | strbean 8 hours ago | parent [-] |
So, based on the source of "Trust me bro.", we'll decide this open question about new technology and the nature of cognition is solved. Seems unproductive.
|
| ▲ | viccis 7 hours ago | parent [-] |
In addition to what I have posted elsewhere in here, I would point out that this is not in fact an "open question", as LLMs have not produced an entirely new and more advanced model of physics. So there is no reason to suppose they could have done so for QM.
|
|
|
| ▲ | anonymous908213 10 hours ago | parent | prev [-] |
| "Proposition X" does not need testing. We already know X is categorically false because we know how LLMs are programmed, and not a single line of that programming pertains to thinking (thinking in the human sense, not "thinking" in the LLM sense which merely uses an anthromorphized analogy to describe a script that feeds back multiple prompts before getting the final prompt output to present to the user). In the same way that we can reason about the correctness of an IsEven program without writing a unit test that inputs every possible int32 to "prove" it, we can reason about the fundamental principles of an LLM's programming without coming up with ridiculous tests. In fact the proposed test itself is less eminently verifiable than reasoning about correctness; it could be easily corrupted by, for instance, incorrectly labelled data in the training dataset, which could only be determined by meticulously reviewing the entirety of the dataset. The only people who are serious about suggesting that LLMs could possibly 'think' are the people who are committing fraud on the scale of hundreds of billions of dollars (good for them on finding the all-time grift!) and people who don't understand how they're programmed, and thusly are the target of the grift. Granted, given that the vast majority of humanity are not programmers, and even fewer are programmers educated on the intricacies of ML, the grift target pool numbers in the billions. |
| |
| ▲ | strbean 8 hours ago | parent [-] |
> We already know X is categorically false because we know how LLMs are programmed, and not a single line of that programming pertains to thinking (thinking in the human sense, not "thinking" in the LLM sense, which is merely an anthropomorphized way of describing a script that feeds back multiple prompts before presenting the final output to the user).
Could you elucidate the process of human thought for me, and point out the differences between that and a probabilistic prediction engine? I see this argument all over the place, but "how do humans think" is never described. It is always left as a black box with something magical (presumably a soul or some other metaphysical substance) inside.
|
| ▲ | anonymous908213 8 hours ago | parent | next [-] |
There is no need to involve souls or magic. I am not making the argument that it is impossible to create a machine that is capable of doing the same computations as the brain. The argument is that whether or not such a machine is possible, an LLM is not such a machine. If you'd like to think of our brains as squishy computers, then the principle is simple: we run code that is more complex than a token prediction engine. The fact that our code is more complex than a token prediction engine is easily verified by our capability to address problems that a token prediction engine cannot. This is because our brain-code is capable of reasoning from deterministic logical principles rather than only probabilities. We also likely have something akin to token prediction code, but that is not the only thing our brain is programmed to do, whereas it is the only thing LLMs are programmed to do.
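To illustrate the distinction being drawn, here is a toy sketch in Python (purely illustrative; it is not a claim about how either LLMs or brains are actually implemented): a sampler that picks its answer by probability, next to a rule that is applied deterministically.

    import random

    # Toy "token prediction engine": chooses a continuation by sampling
    # from a probability distribution over candidate tokens.
    def sample_next(distribution):
        tokens = list(distribution.keys())
        weights = list(distribution.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Deterministic logical rule (modus ponens): if P holds and P implies Q,
    # conclude Q. No probabilities involved; the answer never varies.
    def modus_ponens(p: bool, p_implies_q: bool) -> bool:
        return p and p_implies_q

    print(sample_next({"4": 0.9, "5": 0.1}))  # usually "4", occasionally "5"
    print(modus_ponens(True, True))           # always True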
| ▲ | viccis 8 hours ago | parent | prev [-] |
Kant's model of epistemology, with humans schematizing conceptual understanding of objects through apperception of manifold impressions from our sensibility, and then reasoning about these objects using transcendental application of the categories, is a reasonable enough model of thought. It was (and, I think, still is) a satisfactory answer to the question of how humans can produce synthetic a priori knowledge, something that LLMs are incapable of (don't take my word on that though; ChatGPT is more than happy to discuss [1]).
1: https://chatgpt.com/share/6965653e-b514-8011-b233-79d8c25d33...
|
|