▲ | kazinator 4 days ago |
The problem with your reasoning is that some humans cannot solve the problem even without the irrelevant info about cats. We can easily cherry-pick our humans to fit any hypothesis about humans, because there are dumb humans. The issue is that AI models which, on the surface, appear similar to the smarter quantile of humans in solving certain problems, become confused in ways that humans in that problem-solving class would not be. That's obviously because the language model is not generally intelligent; it's just retrieving tokens from a high-dimensional, statistically fit function. The extra info injects noise into the calculation, which confounds it.
▲ | krisoft 4 days ago | parent | next [-]
> We can easily cherry pick our humans to fit any hypothesis about humans, because there are dumb humans.

Nah. You would take a large number of humans, have half of them take the test with the distracting statements and half without, and then compare their results statistically. Yes, there would be some dumb ones, but as long as you test enough people they would show up in both samples at roughly the same rate.

> become confused in ways that humans in that problem-solving class would not be.

You just state the same thing others are disputing. Do you think it will suddenly become convincing if you write it down a few more times?
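Concretely, that comparison is just a standard two-sample test. Here's a minimal sketch in Python, assuming a chi-square test on a 2x2 table of solved/failed counts per condition; the group sizes and counts below are invented purely for illustration:

    # Hypothetical randomized comparison: half the participants get the plain
    # problems, half get versions with irrelevant "cat" statements added.
    from scipy.stats import chi2_contingency

    # rows: condition (plain, with distractor); columns: solved, failed
    # These counts are made up for illustration only.
    observed = [
        [78, 22],   # plain version: 78 of 100 solved
        [61, 39],   # distractor version: 61 of 100 solved
    ]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2={chi2:.2f}, p={p:.4f}")
    # A small p means the drop in solve rate is unlikely to be explained by
    # individual variation alone, since weak solvers land in both groups.

Randomization is the point: the "dumb humans" end up in both arms at about the same rate, so a significant difference has to come from the distractors, not from who happened to take which test.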
▲ | Kuinox 4 days ago | parent | prev [-]
That's obviously because the brain is not generally intelligent; it's just retrieving concepts from a high-dimensional, statistically fit function. The extra info injects noise into the calculation, which confounds it.