ShowalkKama 6 hours ago

The fact that more tokens = smarter should be expected, given that CoT, thinking, and other techniques increase model accuracy by using more tokens.

Did you test that "caveman mode" has similar performance to the "normal" model?

bitexploder 3 hours ago | parent | next [-]

That is part of it. They are also trained to think in very well-mapped areas of their model: all the RLHF, etc. is tuned on their CoT and on user feedback about responses.

Garlef 6 hours ago | parent | prev [-]

Yes, but: if the token budget is fixed, then the density matters.

A lot of communication is just mentioning the concepts.