anonymous908213 · 4 hours ago
Addendum:

> With recent advances in AI, it becomes ever harder for proponents of intelligence-as-understanding to continue asserting that those tools have no clue and “just” perform statistical next-token prediction.

??????? No, that is still exactly what they do. The article then lists a bunch of examples in which this is trivially exactly what is happening.

> “The cat chased the . . .” (multiple connections are plausible, so how is that not understanding probability?)

It doesn't need to "understand" probability. "The cat chased the mouse" shows up in the distribution 10 times. "The cat chased the bird" shows up in the distribution 5 times. Absent any other context, with the simplest possible model, it now has a probability of 2/3 for the mouse and 1/3 for the bird. You can make the probability calculations as complex as you want, but how could you possibly trot this out as an example that an LLM completing this sentence isn't a matter of trivial statistical prediction? Academia needs an asteroid, holy hell.

[I originally edited this into my post, but two people had replied by then, so I've split it off into its own comment.]
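For the skeptical, here's a minimal sketch of that "simplest possible model" in Python. The toy corpus and the helper name continuation_probs are invented for illustration; a 2:1 ratio of "mouse" to "bird" lines reproduces the same 2/3 and 1/3 as the 10-vs-5 counts above.

    from collections import Counter

    # Toy corpus, invented for illustration. The 2:1 ratio of "mouse" to
    # "bird" continuations mirrors the 10-vs-5 counts in the comment above.
    corpus = [
        "the cat chased the mouse",
        "the cat chased the mouse",
        "the cat chased the bird",
    ]

    def continuation_probs(corpus, prefix):
        # Count every word that immediately follows `prefix` in the corpus,
        # then normalize the counts into probabilities.
        prefix_tokens = prefix.split()
        counts = Counter()
        for line in corpus:
            tokens = line.split()
            for i in range(len(tokens) - len(prefix_tokens)):
                if tokens[i:i + len(prefix_tokens)] == prefix_tokens:
                    counts[tokens[i + len(prefix_tokens)]] += 1
        total = sum(counts.values())
        return {word: n / total for word, n in counts.items()} if total else {}

    print(continuation_probs(corpus, "the cat chased the"))
    # {'mouse': 0.666..., 'bird': 0.333...}

No "understanding" anywhere: just counting and dividing. Real LLMs replace the counting with a learned parametric model, but the objective is still next-token probability.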
n4r9 · 4 hours ago
One question: how do you know that you (or humans in general) aren't also just applying statistical language rules, while convincing yourselves of some underlying narrative involving logical rules? I don't know the answer to this.