okanat 5 hours ago

Congrats on discovering what "thinking" models do internally. That's how they work: they generate "thinking" tokens that are fed back into the model on top of your prompt. There is no way to separate them.

perching_aix 5 hours ago

If you think that confusing message provenance is part of how thinking mode is supposed to work, I don't know what to tell you.

otabdeveloper4 3 hours ago

There is no "message provenance" in LLM machinery.

This is an illusion the chat UX concocts. Behind the scenes the tokens aren't tagged or colored.
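To make this concrete, here's a minimal sketch of how a multi-turn "conversation" is typically flattened into one token stream before the model sees it. The marker strings are made up for illustration (real vendors each use their own special tokens), but the point holds: role boundaries are just more text in the stream, and no token carries a separate provenance tag.

```python
# Hypothetical sketch: a chat "conversation" collapses to one flat string.
# Role markers are ordinary text the model was trained on; individual
# tokens are not tagged or colored with their origin.

def flatten_chat(messages):
    # messages: list of (role, text) pairs
    parts = []
    for role, text in messages:
        # Illustrative markers only, not any vendor's real format.
        parts.append(f"<|{role}|>{text}<|end|>")
    return "".join(parts)

prompt = flatten_chat([
    ("system", "You are helpful."),
    ("user", "What is 2+2?"),
    ("assistant", "<think>simple arithmetic</think>4"),
])
print(prompt)
```

Once flattened, the model only ever consumes this single stream; any sense of "who said what" has to be recovered from the marker text itself.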

perching_aix 2 hours ago

I am aware. That is not what the guy above was suggesting, nor what I was.

Things can generally exist without an LLM receiving and maintaining a representation of them.

If tooling is not currently emitting provenance information and message separators into the context window (the latter would surprise me), and the models are not trained to attend to them, then what I'm suggesting is that these could be inserted and the models could be tuned on them, mitigating the issue.
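As a sketch of that proposal (all marker names here are invented for illustration, not an existing scheme): serialize each segment of the context with an explicit source label, so a model fine-tuned on this format could learn to treat its own earlier "thinking" differently from user-supplied text.

```python
# Hypothetical sketch of the proposed mitigation: wrap every context
# segment in explicit provenance markers before it reaches the model.
# A model would then need fine-tuning to actually attend to these markers.

from dataclasses import dataclass

@dataclass
class Segment:
    source: str  # e.g. "user", "tool", "model_thinking", "model_reply"
    text: str

def serialize_with_provenance(segments):
    # Emit each segment delimited by made-up provenance markers.
    return "".join(
        f"<|src:{seg.source}|>{seg.text}<|/src|>" for seg in segments
    )

ctx = serialize_with_provenance([
    Segment("user", "Summarize the report."),
    Segment("model_thinking", "The report covers Q3 revenue..."),
    Segment("model_reply", "Q3 revenue grew 12%."),
])
print(ctx)
```

The markers alone don't solve anything; the training step is what would teach the model that text inside a `model_thinking` span is its own prior output rather than user instruction.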

What I'm also suggesting is that the above person's snark-laden take on thinking mode, and on how resolvable this issue is, is therefore wrong.