ben_w 16 hours ago

Not saying this to defend the models, as your point is fundamentally sound, but IIRC the user-visible "thoughts" are produced by another LLM summarising the real chain-of-thought, so weird inversions of what the model is "really" "thinking" may well slip in at the user-facing level. The real CoT often uses completely illegible shorthand of its own (some of it Chinese even when the prompt is in English), and even the parts in the user's own language can be hard or impossible to interpret.
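For concreteness, here's a minimal sketch of that two-model setup. Every name in it is hypothetical, since the vendor-side models and interfaces aren't public; the point is just that the user only ever sees a lossy paraphrase of the raw CoT:

    # Hypothetical sketch: two stand-in functions, not real APIs.

    def reasoning_model(prompt: str) -> tuple[str, str]:
        # Hypothetical reasoning model: returns (raw_cot, answer).
        # The raw CoT can be terse, mixed-language shorthand.
        raw_cot = "user wants X ... 检查边界条件 ... so Y"
        return raw_cot, "Y"

    def summariser_model(raw_cot: str) -> str:
        # Hypothetical second LLM that paraphrases the raw CoT into the
        # polished "thoughts" shown to the user. Because this step is a
        # lossy paraphrase, inversions of the real reasoning can slip in.
        return "I checked the edge cases and concluded Y."

    def answer_with_visible_thoughts(prompt: str) -> tuple[str, str]:
        raw_cot, answer = reasoning_model(prompt)
        visible = summariser_model(raw_cot)  # the user never sees raw_cot
        return visible, answer

    thoughts, answer = answer_with_visible_thoughts("Solve X")
    print("Shown to user:", thoughts)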

To agree with your point: researchers have shown that even the real CoT in a model's workspace doesn't accurately reflect its behaviour: https://www.anthropic.com/research/reasoning-models-dont-say...