impossiblefork 21 hours ago

They generate non-output tokens that help correct generation. It is meaningful to call that reasoning.

After all, it can, with whatever secret tricks Google and OpenAI have, be used to solve IMO-level maths problems.

If solving IMO problems can be done without reasoning, then what would be reasoning?

southernplaces7 18 hours ago | parent

>It is meaningful to call that reasoning.

This take on the nature of sentience and consciousness, things we humans have, know we have, and which are quite distinct from unaware pattern-matching, is becoming foolish and tedious.

No, it's not meaningful to call that reasoning. Indeed it's wrong, because it's not reasoning. That would require sentience, and absolutely zero evidence indicates its presence in any LLM. Are some people so dazzled by its algorithmic tricks with communication that they simply fall into near-religious belief that a conversational LLM is an example of awareness?

Computers have been solving mathematical problems for decades. Would you thus argue that those machines were also reasoning?

impossiblefork 6 hours ago | parent

But consider it like this: the model lives in a reward environment where it's tasked with outputting prescribed text or outputting the answer to certain questions.

Instead of just outputting the answer, it generates non-output tokens which increase the probability of the answers that earned it rewards before.

Is this not a sort of reasoning? It looks ahead at imagined continuations and tries to gauge which will earn the reward.
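To make the setup concrete, here is a toy sketch of the reward environment being described (the delimiter and function names are my own illustrative assumptions, not any real training code): the reward scores only the final answer, so everything the model emits before the answer delimiter is free scratch space — "non-output" tokens it can use however it likes.

```python
def reward(generated: str, target: str) -> float:
    # Everything before the last "Answer:" delimiter is scratchpad text
    # (the "non-output tokens"); only what follows it is scored.
    _, _, answer = generated.rpartition("Answer:")
    return 1.0 if answer.strip() == target else 0.0

# A direct answer and a scratchpad-then-answer trace earn the same reward,
# so the model is free to use the extra tokens to improve its odds:
direct = "Answer: 42"
with_scratchpad = "6 * 7 = 42, so... Answer: 42"
assert reward(direct, "42") == 1.0
assert reward(with_scratchpad, "42") == 1.0
```

Under a scheme like this, intermediate tokens carry no direct reward or penalty; they survive training only insofar as conditioning on them makes the rewarded final answer more likely.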