louiereederson 8 hours ago

For a 56.7 score on the Artificial Analysis Intelligence Index, GPT 5.5 used 22M output tokens. For a score of 57, Opus 4.7 used 111M output tokens.

The efficiency gap is enormous. Maybe it's the difference between a GB200 NVL72 and an Amazon Trainium chip?

swyx 8 hours ago | parent | next [-]

why would the chip affect token quantity? this is all the model.

louiereederson 7 hours ago | parent [-]

Chip costs strongly impact the economics of model serving.

It is entirely plausible to me that Opus 4.7 is designed to consume more tokens in order to artificially reduce the API cost/token, thereby obscuring the true operating cost of the model.

I agree though, I chose poor phrasing originally. Better to say that GB200 vs Trainium could contribute to the efficiency differential.

karmasimida 8 hours ago | parent | prev | next [-]

Chips don't impact output quality at this magnitude

ChrisGreenHeur 8 hours ago | parent [-]

True, but the quality of the power played a large part. Most likely nuclear power for this high-quality token efficiency.

AtNightWeCode 4 hours ago | parent | prev | next [-]

You need to compare total cost. Token count is irrelevant.
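To make the point concrete: a minimal sketch of why total cost, not token count, is the comparable number. The per-million-token prices below are hypothetical placeholders, not real API rates; only the token counts come from the figures cited upthread.

```python
def total_cost(output_tokens_millions, price_per_million_tokens):
    """Total output cost in dollars: tokens used times price per token."""
    return output_tokens_millions * price_per_million_tokens

# Token counts from the benchmark run cited upthread; prices are made up.
gpt_cost = total_cost(22, 10.0)   # assume $10 per 1M output tokens
opus_cost = total_cost(111, 2.0)  # assume $2 per 1M output tokens

# A 5x cheaper per-token price can almost exactly cancel a 5x token count,
# so the two runs end up costing nearly the same despite the token gap.
print(gpt_cost, opus_cost)  # 220.0 222.0
```

Under these assumed prices the 5x token gap nearly washes out, which is exactly why per-token pricing alone can obscure the true operating cost.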

dist-epoch 6 hours ago | parent | prev [-]

If it's a new pretrain, the token embeddings could be wider - you can pack more info into a token making its way through the system.

Like Chinese versus English - you need fewer Chinese characters than English words to say the same thing.

So this model internally could be thinking in much more expressive embeddings.
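The Chinese-versus-English analogy can be checked directly: fewer symbols, each carrying more information. This is just the surface analogy, not a claim about either model's actual embedding width.

```python
# Same message, different symbol counts: each Chinese character carries
# roughly what an English word (or several letters) carries.
zh = "你好，世界"    # "Hello, world" in Chinese: 5 characters
en = "Hello, world"  # 12 characters

ratio = len(en) / len(zh)
print(len(zh), len(en), ratio)  # 5 12 2.4
```

By analogy, a model with wider (more expressive) token embeddings could carry more meaning per token, so raw token counts across two different models aren't measuring the same unit of "work".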