colechristensen an hour ago

No, they just need to be trained to have adversarial self-review "thinking" processes.

You ask an LLM "What's wrong with your answer?" and you get pretty good results.
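Roughly the loop I have in mind, as a minimal sketch. ask_llm here is a hypothetical stand-in for whatever LLM client you actually use, not any particular library's API:

    # Sketch of an adversarial self-review loop around a single question.
    # ask_llm(prompt) is a placeholder for a real LLM call.

    def ask_llm(prompt: str) -> str:
        """Stand-in for an actual chat/completions client."""
        raise NotImplementedError

    def answer_with_self_review(question: str) -> str:
        # First pass: draft an answer.
        draft = ask_llm(f"Question: {question}\nAnswer:")

        # Adversarial pass: ask the model to critique its own draft.
        critique = ask_llm(
            f"Question: {question}\n"
            f"Proposed answer: {draft}\n"
            "What's wrong with this answer? List concrete errors, or say 'nothing'."
        )

        # Revision pass: keep the draft if the critique found nothing,
        # otherwise fold the critique back into a rewrite.
        if "nothing" in critique.lower():
            return draft
        return ask_llm(
            f"Question: {question}\n"
            f"Draft answer: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer, fixing only the problems the critique identifies."
        )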

binary0010 43 minutes ago

Or the original output was actually correct, and the adversarial "rethinking" switches it to an incorrect result.