SAI_Peregrinus 2 hours ago
> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

Interestingly enough, it sort of did! Not Turing's original test, where an interviewer attempts to determine which of a human and a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T. Barnum Turing test!

The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject and the AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer that can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. AIs can never truly pass this test, since there is an extremely large number of possible interviewers; they can only fail, or keep succeeding for every interviewer tried up to some point, increasing confidence that they'll keep succeeding. Current-gen LLMs still fail even the non-adversarial version, with no human subject to compare against.
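The "probability non-negligibly different from 0.5" criterion above is just a statistical test against chance guessing. A minimal sketch, with made-up trial counts: for one interviewer's repeated guesses, an exact two-sided binomial test tells you whether their accuracy is distinguishable from a coin flip.

```python
# Sketch of the repeated adversarial test described above, for a single
# interviewer. The trial counts below are hypothetical, not from the article.
# If guesses are at chance (p = 0.5), the AI is indistinguishable so far;
# a tiny p-value means this interviewer can tell human from AI.
from math import comb

def two_sided_binomial_p(correct: int, trials: int) -> float:
    """Exact two-sided binomial test against the null hypothesis p = 0.5."""
    observed = comb(trials, correct) / 2**trials
    # Sum the probability of every outcome at least as extreme as observed.
    return sum(
        comb(trials, k) / 2**trials
        for k in range(trials + 1)
        if comb(trials, k) / 2**trials <= observed
    )

# An interviewer who guesses right 70 times out of 100: far from chance,
# so the AI fails the test for this interviewer.
print(two_sided_binomial_p(70, 100) < 0.01)

# An interviewer at exactly 50/100 gives no evidence either way.
print(two_sided_binomial_p(50, 100))
```

Note the asymmetry the comment points out: many interviewers at chance only raise confidence; a single interviewer reliably above chance is an outright failure.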
jmalicki 11 minutes ago
I see AI pass the Turing test all the time, since humans are constantly being falsely accused of being AI. It doesn't mean that AI got good, just that humans now mistake other humans for AI, which is a form of passing the test. The adversarial version with humans involved is actually easier to pass because of this: real, actual humans wouldn't pass your non-adversarial version.