garciasn 6 hours ago

Depends on the definition of reasoning:

1) think, understand, and form judgments by a process of logic.

-- LLMs do not think, nor do they understand; they also cannot form ‘judgments’ in any human-relatable way. They’re just producing the output that is statistically most likely given their training data.

2) find an answer to a problem by considering various possible solutions

-- LLMs can produce a result that may be an answer, after producing various candidate results that a human must verify as accurate, but they don’t do this in any human-relatable way either.

--

So, while LLMs continue to be amazing mimics, and thus APPEAR to be great at ‘reasoning’, they aren’t doing anything of the sort today.

CamperBob2 5 hours ago

Exposure to our language is sufficient to teach the model how to form human-relatable judgements. The ability to execute tool calls and evaluate the results takes care of the rest. It's reasoning.
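A minimal sketch of that "call a tool, evaluate the result" loop; model_propose and run_tool here are hypothetical stand-ins, not any real API:

def model_propose(task: str, history: list[str]) -> str:
    # placeholder: a real system would ask the LLM for the next tool call
    return f"search({task!r})" if not history else "finish"

def run_tool(action: str) -> str:
    # placeholder: a real system would actually execute the named tool
    return f"result of {action}"

def solve(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        action = model_propose(task, history)
        if action == "finish":            # the model judges the results sufficient
            break
        history.append(run_tool(action))  # execute the call, keep the output
    return history[-1] if history else ""

print(solve("capital of France"))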

garciasn 5 hours ago

SELECT next_word, likelihood_stat FROM context ORDER BY 2 DESC LIMIT 1

is not reasoning; it just appears that way due to Clarke’s third law.
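Read as decoding code rather than SQL, that query is just greedy selection; a minimal sketch, assuming logits is a vector of per-token scores some model has already produced:

import numpy as np

def greedy_next_token(logits: np.ndarray) -> int:
    # softmax is monotonic, so the most probable token is simply the
    # highest-scoring one; this is the entire ORDER BY ... LIMIT 1 step
    return int(np.argmax(logits))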

int_19h 4 hours ago

Sure, at the end of the day it selects the most probable token - but it has to compute the token probabilities first, and that's the part where it's hard to see how it could possibly produce a meaningful log like this without some form of reasoning (and a world model to base that reasoning on).

So, no, this doesn't actually answer the question in a meaningful way.
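To make that split concrete, a toy sketch (the sizes, random weights, and one-layer "forward pass" are all hypothetical placeholders, nothing like a real transformer): computing the scores is the model; the final selection is one line.

import numpy as np

rng = np.random.default_rng(0)

VOCAB, HIDDEN = 1000, 64
W_embed = rng.normal(size=(VOCAB, HIDDEN))  # token embeddings
W_out = rng.normal(size=(HIDDEN, VOCAB))    # hidden state -> per-token scores

def next_token_logits(context_ids: list[int]) -> np.ndarray:
    # crude placeholder for the forward pass; a real model runs dozens of
    # attention/MLP layers over the full context to produce these scores
    h = W_embed[context_ids].mean(axis=0)
    return h @ W_out

logits = next_token_logits([11, 42, 7])
print(int(np.argmax(logits)))  # the 'ORDER BY 2 DESC LIMIT 1' part: one line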

CamperBob2 5 hours ago

(Shrug) You've already had to move your goalposts to the far corner of the parking garage down the street from the stadium. Argument from ignorance won't help.