aqfamnzc 7 months ago
It's hallucination more in the sense that all LLM output is hallucination. CoT is not "what the LLM is thinking". I think of it as the model creating more context/prompt for itself on the fly, so that when it comes up with a final response, all that reasoning is already in its context window.
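A minimal sketch of that point, using a hypothetical `generate()` stand-in for any LLM completion call (not a specific API): the "reasoning" is just ordinary text that gets appended to the prompt before the final answer is produced.

```python
def generate(prompt: str) -> str:
    """Placeholder for a single LLM completion call; swap in a real client."""
    raise NotImplementedError


def answer_with_cot(question: str) -> str:
    # Pass 1: the model writes out its "reasoning" as ordinary tokens.
    reasoning = generate(f"{question}\nLet's think step by step:")

    # Pass 2: that reasoning is now literally part of the context window,
    # so the final answer is conditioned on it, whether or not it reflects
    # any internal "thought process".
    return generate(
        f"{question}\nLet's think step by step:\n{reasoning}\nFinal answer:"
    )
```

Single-pass "reasoning" models fold both steps into one generation, but the mechanism is the same: the chain-of-thought tokens sit in the context ahead of the answer tokens.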
ziofill 7 months ago | parent
Exactly: whether or not it's the "actual thought" of the model, it does influence the final output, so it matters to the user.