dlivingston 16 hours ago

The Chain of Thought in the reasoning models (o3, R1, ...) will actually express some self-doubt and backtrack on ideas. That tells me there's at least some capability for self-doubt in LLMs.

genewitch 10 hours ago | parent [-]

That's not self-doubt, that's programmed in.

A poor man's "thinking" hack was to edit the AI's reply, truncate it at the point where you wanted it to reconsider, append a newline and "Wait...", then hit generate.

It was expensive because editing context isn't free: you have to resend (and the model has to re-process) the entire context.
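A minimal sketch of that hack, assuming a local llama.cpp-style text-completion server (the URL, the n_predict parameter, and the response field name are assumptions; adjust for whatever backend you run):

    import requests

    # Assumed local completion endpoint; change for your server.
    COMPLETION_URL = "http://localhost:8080/completion"

    def poor_mans_thinking(prompt: str, reply_so_far: str, cutoff: int) -> str:
        """Truncate the model's reply at `cutoff`, append "Wait...", and regenerate.

        The whole (prompt + edited reply) has to be resent and re-processed by
        the model, which is what made this trick expensive.
        """
        edited_reply = reply_so_far[:cutoff] + "\nWait..."
        resp = requests.post(
            COMPLETION_URL,
            json={"prompt": prompt + edited_reply, "n_predict": 256},
            timeout=120,
        )
        resp.raise_for_status()
        # Return the edited reply plus whatever the model adds after "Wait..."
        return edited_reply + resp.json().get("content", "")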

Something like this was injected into the thinking models, I hope programmatically.