b40d-48b2-979e 4 hours ago

LLMs don't "reason".

thot_experiment 4 hours ago | parent | next

Why is this a meaningful distinction to you? What does "reason" mean here? Can we construct a test that cleanly splits what humans do from what LLMs do?

grey-area 4 hours ago | parent

Sure — things like counting the ‘r’s in “strawberry”, for example (until they are retrained not to make that mistake).
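For reference, the task itself is trivially checkable in code — a minimal sketch of the literal letter count being discussed:

```python
# Count occurrences of 'r' in "strawberry" — the task some LLMs
# have famously answered incorrectly.
word = "strawberry"
count = word.count("r")
print(count)  # prints 3
```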

thot_experiment 3 hours ago | parent

There are humans who can't do that but are clearly capable of reasoning, so it's not a meaningful categorical split.

bensyverson 4 hours ago | parent | prev

Take it up with OpenAI's API designers; it's their term.