| ▲ | kogold 6 hours ago |
| Let me rephrase that for you: "Interesting idea! Token consumption sure is an issue that should be addressed, and this is pretty funny too! However, I happen to have an unproven claim that tokens are units of thinking, and therefore, reducing the token count might actually reduce the model's capabilities. Did anybody using this by chance notice any degradation (since I did not bother to check myself)?" Have a nice day! |
|
| ▲ | Chance-Device 5 hours ago | parent | next [-] |
| Let’s see, I think these pretty much map out a little chronology of the research:
https://arxiv.org/abs/2112.00114 (first, that scratchpads matter)
https://arxiv.org/abs/2406.06467 (then why they matter)
https://arxiv.org/abs/2404.15758 (then that they don’t even need to be meaningful tokens)
https://arxiv.org/abs/2512.12777 (then a conceptual framework for the whole thing) |
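To make the scratchpad idea concrete: the first paper's finding is that letting the model emit intermediate work before the final answer improves accuracy, because those intermediate tokens carry computation. A minimal sketch of the two prompt shapes (wording is illustrative, not taken from the paper):

    # Direct answering: the model must commit to a result in one step.
    direct = "Q: What is 29 * 31?\nA:"

    # Scratchpad / chain of thought: the model first generates intermediate
    # tokens it can condition on, then commits to the answer.
    scratchpad = (
        "Q: What is 29 * 31?\n"
        "Scratchpad: 29 * 31 = 29 * 30 + 29 = 870 + 29 = 899\n"
        "A:"
    )

Everything on the scratchpad line is extra context the model gets to attend to before producing the token after "A:".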
| |
| ▲ | bsza 4 hours ago | parent [-] | | I don’t see the relevance; the discussion is about whether boilerplate text that occurs intermittently in the output, purely for the sake of linguistic correctness / sounding professional, is of any benefit. Chain of thought doesn’t look like that to begin with; it’s a contiguous block of text. | | |
| ▲ | Chance-Device 3 hours ago | parent | next [-] | | To boil it down: chain of thought isn’t really a chain of thought; it’s just more generated tokens appended to the context. Those tokens participate in the computations of subsequent forward passes, which are doing things we don’t see or even understand. More LLM-generated context matters. | |
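For anyone who wants the mechanical version of that claim, here is a toy numpy sketch (random weights, nothing like a real model) of why generated tokens keep doing work: each new token is produced by attending over the entire context so far, including tokens the model itself wrote.

    import numpy as np

    # Toy single-head attention over a growing context. The point is
    # structural: next_token() reads from ALL prior positions, so every
    # token the "model" emits becomes input to every later step.
    rng = np.random.default_rng(0)
    d, vocab = 16, 50
    E = rng.normal(size=(vocab, d))                  # token embeddings
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Wout = rng.normal(size=(d, vocab))

    def next_token(ids):
        X = E[ids]                                   # whole context: prompt + generated
        q = X[-1] @ Wq                               # query from the newest position
        a = (X @ Wk) @ q / np.sqrt(d)                # scores against every prior token
        w = np.exp(a - a.max()); w /= w.sum()        # softmax over the context
        return int(np.argmax((w @ (X @ Wv)) @ Wout))

    ctx = [1, 7, 3]                                  # the "prompt"
    for _ in range(5):
        ctx.append(next_token(ctx))                  # emitted tokens rejoin the context
    print(ctx)

Whether those extra tokens read as "reasoning" to us is beside the point; they are extra state the computation gets to use.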
| ▲ | bitexploder 4 hours ago | parent | prev | next [-] | | That is not how CoT works. It is all in context. All influenced by context. This is a common and significant misunderstanding of autoregressive models and I see it on HN a lot. | |
| ▲ | j16sdiz 4 hours ago | parent | prev [-] | | I don't see the relevance -- and casually dismiss years of research without even trying to read those papers. |
|
|
|
| ▲ | bitexploder 4 hours ago | parent | prev | next [-] |
| That "unproven claim" is actually a well-established concept called Chain of Thought (CoT). LLMs literally use intermediate tokens to "think" through problems step by step. They have to generate tokens to talk to themselves, debug, and plan. Forcing them to skip that process by cutting tokens, like making them talk in caveman speak, directly restricts their ability to reason. |
|
| ▲ | ShowalkKama 6 hours ago | parent | prev | next [-] |
| The fact that more tokens = more smart should be expected, given CoT / thinking / other techniques that increase model accuracy by using more tokens. Did you test whether "caveman mode" has performance similar to the "normal" model? |
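That experiment is cheap to run. A hypothetical harness (ask() is a placeholder, not any specific SDK, and the two toy questions stand in for a real benchmark):

    # Hypothetical A/B harness: same questions, two output styles.
    def ask(prompt: str) -> str:
        return ""  # placeholder: wire this to your model client of choice

    QUESTIONS = [("What is 17 * 23?", "391"),
                 ("What is the capital of Australia?", "Canberra")]

    def accuracy(style: str) -> float:
        hits = sum(gold in ask(f"{style}\n\n{q}") for q, gold in QUESTIONS)
        return hits / len(QUESTIONS)

    print("normal :", accuracy("Answer normally."))
    print("caveman:", accuracy("Answer in as few words as possible."))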
| |
| ▲ | bitexploder 3 hours ago | parent | next [-] | | That is part of it. They are also trained to think in very well-mapped areas of their model: all the RLHF, etc. is tuned on their CoT and on user feedback about responses. | |
| ▲ | Garlef 6 hours ago | parent | prev [-] | | Yes, but: if the token budget is fixed, then the density matters. A lot of communication is just mentioning the concepts. |
|
|
| ▲ | ano-ther 4 hours ago | parent | prev | next [-] |
| Looking at the skill.md, wouldn’t this actually increase token use, since the model now needs to reformat its output? Funny idea, though. And I’d like to see more matter-of-fact output from Claude. |
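One way to check instead of guessing is to count both sides of the ledger: the skill's instructions are a fixed per-conversation prompt overhead, while the terse style saves tokens on every response. A rough sketch with tiktoken (OpenAI's tokenizer, so the counts only approximate Claude's; the sample strings are made up):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    verbose = ("Great question! There are a few things to consider here, "
               "but the short answer is that the cache is stale.")
    caveman = "Cache stale."
    # caveman should come out at a small fraction of the verbose count
    print(len(enc.encode(verbose)), len(enc.encode(caveman)))

The trade only favors caveman mode once the accumulated per-response savings outweigh the skill's prompt overhead.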
|
| ▲ | mynegation 6 hours ago | parent | prev | next [-] |
| No, let me rephrase it for you. “tokens used for think. Short makes model dumb” |
| |
| ▲ | freehorse 5 hours ago | parent [-] | | Talk a lot not same as smart | | |
| ▲ | taneq 4 hours ago | parent [-] | | Think before talk better though | | |
| ▲ | freehorse 3 hours ago | parent [-] | | Think makes smart. But think right words makes smarter, not think more words. Smart is elucidate structure and relationships with right words. | | |
| ▲ | ben_w 12 minutes ago | parent [-] | | think make smart, llm approximate "think" with context, llm not smart ever but sometimes less dumb with more word |
|
|
|
|
|
|
| ▲ | estearum 5 hours ago | parent | prev [-] |
| Can't you know that tokens are units of thinking just by... like... thinking about how models work? |
| |
| ▲ | gchamonlive 5 hours ago | parent | next [-] | | Can't you just know that the earth is the center of the universe by... like... just looking at how the world works? | | |
| ▲ | estearum 4 hours ago | parent [-] | | Actually, you'd trivially disprove that claim if you started from mechanistic knowledge of how orbits work, just as we have mechanistic knowledge of how LLMs work. | | |
| ▲ | gchamonlive 4 hours ago | parent [-] | | We have empirical observations, like replicating a fixed set of inner layers to make the model think longer, or the fact that there are encoder and decoder layers. But exactly why those layers are the way they are, or how they come together to produce emergent behaviour... do we have mechanistic knowledge of that? | | |
| ▲ | ben_w 7 minutes ago | parent | next [-] | | I think we've *only* got the mechanism, not the implications. Compare with fluid dynamics; it's not hard to write down the Navier–Stokes equations, but there's a million dollars available to the first person who can prove or give a counter-example of the following statement: In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.
- https://en.wikipedia.org/wiki/Navier–Stokes_existence_and_sm... | |
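For reference, the equations themselves are compact to state; what is open is the existence and smoothness of solutions. The standard incompressible form, with velocity u, pressure p, density ρ, kinematic viscosity ν, and forcing f:

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
        = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f},
    \qquad \nabla \cdot \mathbf{u} = 0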
| ▲ | xpe 3 hours ago | parent | prev [-] | | Though the above exchange felt a tiny bit snarky, I think the conversation did get more interesting as it went on. I genuinely think both people could probably gain by talking more -- or at least by figuring out a way to move past the surface-level differences. Yes, humans designed LLMs. But this doesn't mean we understand their implications, even at this (relatively simple) level. |
|
|
| |
| ▲ | xpe 4 hours ago | parent | prev [-] | | > Can't you know that tokens are units of thinking just by... like... thinking about how models work? Seems reasonable, but this doesn't settle probably-empirical questions like: (a) to what degree is 'more' better? (b) how important are filler words? (c) how important are words that signal connection, causality, influence, reasoning? | | |
| ▲ | estearum 4 hours ago | parent [-] | | Right, there's probably something more subtle, like "semantic density within tokens is how models think". So it's probably true that the "Great question!" type preambles are not helpful, but there's definitely a lower bound on exactly how primitive a caveman language we're pushing toward. |
|
|