tim333 2 days ago:
No, they haven't agreed, because there was never a practical definition of the test. Turing described a game:

> It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B.

> We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?

(some bits removed)

It was done more as a thought experiment. As a practical test it would probably be too easy to game with ELIZA-type programs to be a good test. So computers could probably pass, but it's not really hard enough to match most people's idea of AI.
sillyfluke a day ago (in reply):
The definition seems to suffice if you give the interrogator as much time as they want and don't limit their world knowledge, and the definition you cited doesn't seem to impose either limit. By "world knowledge" I mean any knowledge, including but not limited to knowledge of how the machine works and its limitations. So if the machine can't fool Alan Turing specifically, it fails, even though it might have fooled some random Joe who's been living under a rock. Hence, since current LLMs are bound to hallucinate given enough time and seem unable to maintain conversation context as robustly as humans, wouldn't they inevitably fail?