| ▲ | fabianhjr 3 days ago |
> “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” ~ Edsger W. Dijkstra

The point of the Turing Test is that if there is no extrinsic difference between a human and a machine, the intrinsic difference is moot for practical purposes. It is not an argument about whether a machine (using linear algebra, machine learning, large language models, or any other method) can think, or about what constitutes thinking or consciousness. The Chinese Room thought experiment is the complement on the intrinsic side of the comparison: https://en.wikipedia.org/wiki/Chinese_room
| ▲ | tim333 2 days ago | parent | next |
I kind of agree, but I think the point is that what people mean by these words is vague, so he said:

>Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

which is: can you tell the AI's answers from the human ones in a test? It then becomes an experimental result rather than a matter of what you mean by 'think', or maybe by 'extrinsic difference'.
| ▲ | rcxdude 2 days ago | parent | prev |
The Chinese Room is a pretty useless thought experiment, I think. If you believe machines can't think, it seems like an utterly obvious result; if you believe machines can think, it's just obviously wrong.