jstummbillig · 7 hours ago
You are paying per token, but what you care about is token efficiency. If token efficiency has improved by as much as they claim (i.e. you need fewer tokens to complete a task successfully), all seems well.
    mangolie · 7 hours ago
    Not for coding, because it actually needs to read and write large files.
        baalimago · 7 hours ago
        Well, sort of. Imagine the case where it first scans the repo, then "intelligently" creates architecture files describing the project. The model's level of intelligence determines the quality of that summary, and with it how often deep scans are needed in subsequent sessions. A higher level of intelligence also improves comprehension of these architecture files. The same principle applies when designing plans for complex tasks, etc. The number of tokens needed to grasp a concept is what matters.
        jstummbillig · 7 hours ago
        To be fair, I have not kept close track of what actually happens inside the "thinking" portion of recent releases. But last time I checked, there was still a lot of verbosity, and mistakes, exceeding the actual amount of required, usable code generation by a wide margin.
    cbg0 · 7 hours ago
    If it uses half the tokens to complete a task, then doubling the cost is perfectly fine. But is that actually true?
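The arithmetic behind that comment can be sanity-checked in a few lines. All figures below are made up for illustration; none of the prices or token counts come from the thread. Cost per task is the per-token price times the tokens consumed, so halving token usage exactly offsets a doubled price.

```python
# Cost per task = per-token price x tokens consumed.
# All numbers here are hypothetical, purely to illustrate the trade-off.
def cost_per_task(price_per_mtok: float, tokens_used: int) -> float:
    """Dollar cost of one task, given a price per million tokens."""
    return price_per_mtok * tokens_used / 1_000_000

old_model = cost_per_task(price_per_mtok=15.0, tokens_used=200_000)
new_model = cost_per_task(price_per_mtok=30.0, tokens_used=100_000)

# Doubled price, half the tokens: cost per task is unchanged.
print(old_model, new_model)  # 3.0 3.0
```

Whether the newer model actually uses half the tokens in practice is the empirical question the rest of the thread argues about.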
        2001zhaozhao · 7 hours ago
        This happens with every new model release, though. The model makes fewer mistakes and spends less time fixing them, resulting in a token usage reduction for the same difficulty of task. Almost any task other than straight boilerplate will benefit from this. In the same vein, I would guess that Opus 4.7 is probably cheaper for most tasks than 4.6, even though the tokenizer uses more tokens for the same length of string.
            jorl17 · 7 hours ago
            Maybe you'll have better luck, but our team just cannot use Opus 4.7. Some say it goes off on endless tangents, others that it doesn't work enough. Personally, I find it acts, talks, and makes mistakes like GPT models, at a far more exorbitant price. It misses important edge cases and doesn't get off its ass to do more than the bare minimum I asked: I mention an error and it fixes that error, without even thinking to check whether the same error exists elsewhere and proposing to fix it there. I've slowly been moving to GPT5.4-xhigh with some skills to make it act a bit more like Opus 4.6, in case the latter gets discontinued in favour of Opus 4.7.
            cbg0 · 7 hours ago
            Doesn't look like it's cheaper, better, or uses fewer tokens: https://www.reddit.com/r/Anthropic/comments/1stf6fz/one_week... YMMV, I know.
        jstummbillig · 7 hours ago
        We'll find out!