andrepd 13 hours ago

I have no idea what this means, can someone give the eli5?

a_bonobo 10 hours ago | parent | next [-]

Anthropic has a nice press release that summarises it in simpler terms: https://www.anthropic.com/research/reasoning-models-dont-say...

meesles 12 hours ago | parent | prev | next [-]

Ask an LLM!

otabdeveloper4 8 hours ago | parent | prev [-]

I don't either, but chain of thought is obviously bullshit and just more LLM hallucination.

LLMs will routinely "reason" through a solution and then give a final answer that is completely unrelated to the preceding "reasoning".

aqfamnzc 7 hours ago | parent [-]

It's hallucination only in the sense that all LLM output is hallucination. CoT is not "what the LLM is thinking". I think of it as the model creating more context/prompt for itself on the fly, so that when it produces the final response, all that reasoning is already in its context window.
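
Roughly, as a sketch (with `generate` as a hypothetical stand-in for a single text-completion call to any LLM, not any particular API):

    def generate(prompt: str) -> str:
        """Hypothetical LLM call: returns a text completion of `prompt`."""
        raise NotImplementedError  # placeholder; wire up to a real model

    def answer_with_cot(question: str) -> str:
        # Step 1: have the model emit "reasoning" tokens.
        cot_prompt = f"Question: {question}\nLet's think step by step:\n"
        reasoning = generate(cot_prompt)

        # Step 2: feed the question *plus* the self-generated reasoning back in.
        # The final answer is conditioned on that extra context; that's the only
        # sense in which the CoT text "helps", and nothing forces the answer to
        # actually follow from it.
        final_prompt = cot_prompt + reasoning + "\nFinal answer:"
        return generate(final_prompt)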