great_psy 3 hours ago

Is there any reason provided by Anthropic for why they changed the tokenizer?

Is there a quality increase from this change, or is it a money grab?

Aurornis 2 hours ago | parent | next [-]

The tokenizer is an important part of overall model training and performance, and it's only one piece of the overall cost per request. If a tokenizer that produces more tokens also leads to a model that reaches the correct answer more quickly and needs fewer re-prompts (because it got the answer right the first time), the overall cost can still be lower.

Comparisons are still ongoing but I have already seen some that suggest that Opus 4.7 might on average arrive at the answer with fewer tokens spent, even with the additional tokenizer overhead.
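The trade-off is simple to sketch with back-of-envelope numbers (all figures below are hypothetical, not Anthropic's actual pricing or retry rates): expected spend per task is tokens per attempt times average attempts times price per token, so a tokenizer that adds overhead can still win if it cuts re-prompts.

```python
# Hypothetical illustration: a tokenizer that emits more tokens per request
# can still lower expected total cost if it reduces average re-prompts.

PRICE_PER_TOKEN = 0.00001  # hypothetical flat per-token price

def expected_cost(tokens_per_attempt: int, avg_attempts: float) -> float:
    """Expected spend for one task = tokens/attempt * attempts * price."""
    return tokens_per_attempt * avg_attempts * PRICE_PER_TOKEN

# Terser tokenizer, but the model needs more retries on average:
old = expected_cost(tokens_per_attempt=1000, avg_attempts=1.8)
# 20% more tokens per attempt, but fewer retries:
new = expected_cost(tokens_per_attempt=1200, avg_attempts=1.3)

print(old, new)  # the "more tokens" case comes out cheaper overall
```

With these made-up numbers the old setup costs 0.018 per task versus 0.0156 for the new one, so the extra tokenizer overhead is more than paid back by fewer attempts.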

So, no, not a money grab.

ChadNauseam an hour ago | parent | prev [-]

How would it be a money grab? If the new tokenizer requires more tokens to encode the same information, it costs them more money for inference too. The point of charging per token is that the price is proportional to the compute cost. That's my understanding, anyway.

abrookewood an hour ago | parent [-]

Because everyone burns through their limits much faster, forcing them to upgrade to higher limits or new tiers.