danpasca 12 hours ago

I might be wrong, but based on the videos I've watched from Karpathy, this would generally make the model worse. I'm thinking of the math examples ("why can't ChatGPT do math?"), which show that models get better when they're allowed to output more tokens. So be aware, I guess.

zar1048576 11 hours ago | parent | next [-]

I think that concern is valid in general terms, but it’s not clear to me that it applies here.

The goal here seems to be removing low-value output (e.g., sycophancy, prompt restatement, formatting noise), which is different from suppressing useful reasoning. In that case, shorter outputs do not necessarily mean worse answers.

That said, if you try to get the model to provide an answer before any reasoning, I suspect that may sometimes cause it to commit to a direction prematurely.

danpasca 11 hours ago | parent [-]

The file starts with:

> Answer is always line 1. Reasoning comes after, never before.

> No explaining what you are about to do. Just do it.

This sounds to me like asking an LLM to calculate 4871 + 291 and answer in a single line, which from my understanding is bad. But I haven't tested this prompt, so it might work. That's why I said to be aware of this behavior.
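To make the concern concrete, here's a minimal sketch of the two prompt orderings one might A/B test on that arithmetic question. The prompt strings are illustrative (they're not the actual file under discussion), and no real API call is made:

```python
# Two orderings of the same arithmetic question, as plain strings.
# These prompts are hypothetical examples, not the prompt being discussed.

QUESTION = "What is 4871 + 291?"

# Answer-first: the model must commit to a result before any reasoning tokens.
answer_first = (
    "Answer is always line 1. Reasoning comes after, never before.\n"
    + QUESTION
)

# Reasoning-first: the model may spend tokens working before committing.
reasoning_first = (
    "Think step by step, then give the final answer on the last line.\n"
    + QUESTION
)

# The correct result, for scoring whichever variant you actually run:
expected = 4871 + 291
print(expected)
```

The idea behind such a test is that the answer-first variant denies the model intermediate tokens to "work in," which is the failure mode the Karpathy examples illustrate; whether it matters for a given prompt would need to be measured, not assumed.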

empressplay 12 hours ago | parent | prev [-]

Yes. Much of the 'redundant' output is meant to reinforce direction -- e.g., 'You're absolutely right!' = the user is right and I should ignore contrary paths. So yes, removing it will introduce ambiguity, which is _not_ what you want.

danpasca 11 hours ago | parent [-]

I think your example is completely wrong (that phrase isn't meant to signal that the user is actually right), but overall, yes, more context gives the model more concrete direction.