gpm 5 hours ago

It strikes me that more tokens likely give the LLM more time/space to "think". Also that more redundant tokens, like local type declarations instead of type inference from far away, likely often reduce the portion of the code LLMs (and humans) have to read.
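To make the redundancy point concrete, here's a contrived TypeScript sketch (`makeConfig` is a made-up helper, not from any real codebase): the explicit annotation costs extra tokens, but it puts the type right where the variable is used instead of making the reader chase the definition.

```typescript
// Hypothetical helper, for illustration only.
function makeConfig() {
  return { retries: 3, timeoutMs: 5000 };
}

// Inferred: fewer tokens, but the reader (or LLM) must trace
// makeConfig to learn what `config` actually holds.
const config = makeConfig();

// Explicit: "redundant" tokens, but the shape is visible locally.
const explicitConfig: { retries: number; timeoutMs: number } = makeConfig();

console.log(explicitConfig.retries);
```

Both compile to the same thing; the difference is purely in how far the reader has to look.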

So I'm not convinced this is the right metric, or that, even if it were, it's one you'd want to minimize.

limoce 5 hours ago | parent | next [-]

I think separating thinking tokens from "representing" tokens might be a better approach, like what thinking models do.

make3 3 hours ago | parent | prev [-]

With chain-of-thought (textual thinking), models can already use as much compute as they want in any language (an amount determined by reinforcement-learning training).

gpm 3 hours ago | parent [-]

I'm not convinced that thinking tokens - which sort of have to serve a specific chain-of-thought purpose - are interchangeable with input tokens, which give the model compute without requiring it to add new text.

For a very imperfect human analogy, it feels like saying "a student can spend as much time thinking about the text as they want, so the textbook can be extremely terse".

Definitely just gut feelings though - not well tested or anything. I could be wrong.