altruios | 3 days ago
I have trouble reconciling this point with the known phenomenon of hallucinations. I would suppose the correct test is an 'infinite' Turing test, which, after a long enough conversation, LLMs invariably fail, as they eventually degrade. A better measure than the binary "have they passed the Turing test?" is the metric "for how long do they continue to pass the Turing test?"...

This ignores the idea of probing the LLM's weak spots. Since they do not 'see' their input as characters but as tokens, asking them to count letters in words, or about the specifics of those sub-token divisions, provides a shortcut (for now) to making them fail the Turing test. But that approach is not in the spirit of the Turing test, as it only points out a blind spot in their perception, like how a human would have to guess a bit at what things would look like if UV and infrared were added to our visual field... sure, we could reason about it, but we wouldn't actually perceive those wavelengths, so we could make mistakes about that qualia. And it would say nothing of our ability to think if we could not perceive those wavelengths, even if 'more-seeing' entities judged us as inferior for it...
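To make the token point concrete, here's a minimal sketch (assuming the tiktoken library and its cl100k_base encoding, my own illustrative choices, not something from the original comment) showing that a model receives a word as a few multi-character chunks rather than as individual letters:

    # Minimal sketch of the tokenization blind spot, assuming the tiktoken
    # library and the "cl100k_base" encoding (illustrative choices).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    word = "strawberry"  # hypothetical example word
    token_ids = enc.encode(word)

    # The model "sees" a handful of multi-character chunks, not individual
    # letters, which is why letter-counting questions probe a perceptual gap
    # rather than reasoning ability.
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{word!r}: {len(word)} characters, {len(token_ids)} tokens: {pieces}")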
throwawaylaptop | 3 days ago | parent
I date a lot of public school teachers for some reason (hey, once you have a niche it's easy to relate and they like you), and I assure you you'd have a better, more human conversation with an LLM than with most middle school teachers.