It is far more accurate to say that LLMs are collapsing a probability distribution over possible responses to a given input than to call the process any kind of “thinking” or “reasoning”.
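
A minimal sketch of what that collapse looks like at the token level, using a toy vocabulary and made-up logit values (not any specific model's code): the model scores every token for the given input, softmax turns the scores into a probability distribution, and sampling collapses that distribution to a single output token.

```python
# Illustrative only: toy vocabulary and hypothetical logits standing in for a
# real model's output layer.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["yes", "no", "maybe", "unsure"]     # toy vocabulary
logits = np.array([2.1, 0.3, 1.4, -0.5])     # hypothetical scores the model assigns for this input

probs = np.exp(logits - logits.max())        # softmax: scores -> probability distribution
probs /= probs.sum()

next_token = rng.choice(vocab, p=probs)      # sampling "collapses" the distribution to one token
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

Repeating that step token by token is the whole generation process; nothing in it requires, or resembles, deliberate reasoning.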