astrange 5 days ago

LLMs do not have internal reasoning, so the yapping is an essential part of producing a correct answer, insofar as the extra tokens are what carry the computation to completion.

Reasoning models mostly work by arranging things so the yapping happens first and is marked up so the UI can hide it.
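
As a minimal sketch of how a UI might do that split, assuming the model wraps its reasoning in <think>...</think> tags the way DeepSeek-R1 does (other models use different markers, and the function name and sample text here are just illustrative):

    import re

    def split_reasoning(text: str) -> tuple[str, str]:
        # Separate marked-up reasoning from the final answer.
        # Assumes DeepSeek-R1-style <think>...</think> delimiters.
        match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
        if match is None:
            return "", text.strip()  # no marked reasoning found
        thinking = match.group(1).strip()
        answer = text[match.end():].strip()
        return thinking, answer

    raw = "<think>User wants a haiku; count 5-7-5...</think>Autumn wind rises."
    thinking, answer = split_reasoning(raw)
    print(answer)  # the UI shows this; `thinking` goes behind a collapse toggle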

typpilol 5 days ago | parent [-]

You can see a good example of this in the DeepSeek web chat when you enable thinking mode.

It spews pages and pages of reasoning before it answers.

astrange 5 days ago | parent [-]

My favorite is when it does all that thinking and then the answer doesn't use any of it.

If you ask it to write a story, I find it often considers five or so plots or sets of character names in the thinking, but then the answer is entirely different.

mailund 5 days ago | parent [-]

I've also noticed that with difficult questions, the real solution is sometimes buried in the pages of "reasoning" but missing from the actual answer.