danaris 3 days ago
| But you are now arguing against a strawman, namely, "it is not possible to construct a computer that thinks". The argument that was actually made was "LLMs do not think". |
|
umanwizard 3 days ago
A: X, because Y
B: But Y would also imply Z
C: A was never arguing for Z! This is a strawman!
danaris 3 days ago

"LLMs cannot think like brains" does not imply "no computer that could ever be constructed could think like a brain".
umanwizard 3 days ago

“LLMs cannot think like brains” is “X”.
danaris 2 days ago

That appears to be your own assumptions coming into play. Everything I've seen says "LLMs cannot think like brains" is not dependent on an argument that "no computer can think like a brain", but rather on an understanding of just what LLMs are, and what they are not.
|
|
|