garciasn 6 hours ago
Depends on the definition of reasoning:

1) think, understand, and form judgments by a process of logic. LLMs do not think, nor do they understand; they also cannot form ‘judgments’ in any human-relatable way. They’re just providing results in the most statistically relevant way their training data permits.

2) find an answer to a problem by considering various possible solutions. LLMs can provide a result that may be an answer, after producing various results that must be verified as accurate by a human, but they don’t do this in any human-relatable way either.

So while LLMs continue to be amazing mimics, and thus APPEAR to be great at ‘reasoning’, they aren’t doing anything of the sort today.
CamperBob2 5 hours ago
Exposure to our language is sufficient to teach the model how to form human-relatable judgements. The ability to execute tool calls and evaluate the results takes care of the rest. It's reasoning.