▲ | astrange 5 days ago |
LLMs do not have internal reasoning, so the yapping is an essential part of producing a correct answer: the extra tokens are the computation. Reasoning models mostly work by organizing things so the yapping happens first and is marked so the UI can hide it.
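To make that concrete, here is a minimal sketch of how a UI might separate the yapping from the final answer. It assumes the reasoning-first convention where the chain of thought is wrapped in `<think>...</think>` tags (as in the open-weights DeepSeek-R1 releases); hosted APIs often expose the same split as a separate response field instead.

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think>
    tags at the start of the output, as in the open-weights
    DeepSeek-R1 releases. Other models mark the reasoning span
    differently, or return it in a separate API field.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not match:
        # No marked reasoning: treat the whole output as the answer.
        return "", raw.strip()
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

# Example: the UI would collapse `reasoning` behind a toggle
# and show only `answer` by default.
raw = "<think>User asked 2+2. That is 4.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
```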
▲ | typpilol 5 days ago | parent |
You can see a good example of this on the DeepSeek website chat when you enable thinking mode: it spews pages and pages before it answers.