sillyfluke a day ago
The definition seems to suffice if you give the interrogator as much time as they want and don't limit their world knowledge, neither of which the definition you cited seems to constrain? By "world knowledge" I mean any knowledge, including but not limited to knowledge of how the machine works and its limitations. So if the machine can't fool Alan Turing specifically, then it fails, even if it might have fooled some random Joe who's been living under a rock. Hence, since current LLMs are bound to hallucinate given enough time and seem unable to maintain a conversation's context window as robustly as humans, they would inevitably fail?