villgax 6 hours ago

100 times more chips for equivalent memory, sure.

m4r1k 5 hours ago | parent | next [-]

Check the specs again. Per chip, TPU 7x has 192GB of HBM3e, whereas the NVIDIA B200 has 186GB.

While the B200 wins on raw FP8 throughput (~9000 vs 4614 TFLOPS), that makes sense given NVIDIA has optimized for the single-chip game for over 20 years. But the bottleneck here isn't the chip; it's the domain size.

NVIDIA's top-tier NVL72 tops out at an NVLink domain of 72 Blackwell GPUs. Meanwhile, Google is connecting 9216 chips at 9.6 Tbps to deliver nearly 43 ExaFLOPS. NVIDIA has the ecosystem (CUDA, community, etc.), but until they can match that interconnect scale, they simply don't compete in this weight class.
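
A back-of-envelope check of those figures (a sketch in Python; the per-chip numbers come from the comments above, and the sparse-vs-dense halving is an assumption based on NVIDIA's 2:4 structured-sparsity convention):

    # Pod-level FP8 throughput from the per-chip figure above.
    ironwood_fp8 = 4614e12        # FLOPS per TPU chip (4614 TFLOPS, per parent)
    pod_chips = 9216              # chips in one pod
    print(ironwood_fp8 * pod_chips / 1e18)   # ~42.5 ExaFLOPS, i.e. "nearly 43"

    # NVIDIA's ~9000 TFLOPS figure is typically the 2:4 sparse number;
    # the dense equivalent is half that, which lands close to Ironwood's.
    print(9000 / 2)               # 4500 TFLOPS dense FP8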

cwzwarich 3 hours ago | parent | next [-]

Isn’t the 9000 TFLOP/s number Nvidia’s relatively useless sparse FLOP count that is 2x the actual dense FLOP count?

PunchyHamster 2 hours ago | parent | prev [-]

Yet everyone uses NVIDIA, and Google is in a catch-up position.

Ecosystem is a MASSIVE factor and will remain one for all but the biggest models.

epolanski an hour ago | parent [-]

Catch-up in what, exactly? Google isn't building hardware to sell; they aren't in the same market.

Also, I feel you completely misunderstand: the problem isn't how fast ONE GPU is vs ONE TPU; what matters is the cost for the same output. If I can fill a datacenter at half the cost for the same output, does it matter that I've used twice the TPUs and that a single NVIDIA Blackwell was faster? No...
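
A toy illustration of that cost-per-output point, with entirely hypothetical prices and throughputs (none of these numbers are real quotes):

    # Hypothetical numbers only: dollars of hardware per unit of output.
    gpu = {"price": 40_000, "tokens_per_sec": 100}   # faster, pricier chip
    tpu = {"price": 15_000, "tokens_per_sec": 60}    # slower, cheaper chip

    def dollars_per_token_rate(chip):
        return chip["price"] / chip["tokens_per_sec"]

    print(dollars_per_token_rate(gpu))   # 400.0
    print(dollars_per_token_rate(tpu))   # 250.0
    # You need ~1.7x as many TPUs for the same output, yet the fleet
    # still costs less; per-chip speed alone decides nothing.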

And hardware cost isn't even the biggest problem; operational costs, mostly power and cooling, are another huge one.

So if you design a solution that fits your stack (because it was designed for it) and optimize for your operational costs, you're light-years ahead of a competitor using the more powerful solution that costs 5 times more in hardware and twice as much to operate.

All I'm saying is more or less true for inference economics; I have no clue about training.

butvacuum an hour ago | parent [-]

Also, isn't memory a bit moot? At scale I thought that the ASICs frequently sat idle waiting for memory.

pests 9 minutes ago | parent [-]

You're doing operations on the data once it's been transferred to GPU memory: either shuffling it around various caches and processors, or feeding it into tensor cores and other matrix units. You don't want to be sitting idle.
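
That's essentially the roofline argument. A sketch with rough, assumed hardware numbers (the peak FLOPS and HBM bandwidth below are approximations, not vendor specs):

    # Roofline: a kernel is memory-bound when its arithmetic intensity
    # (FLOPs per byte moved from HBM) falls below peak_flops / peak_bw.
    peak_flops = 4.5e15    # ~4.5 PFLOPS dense FP8 (assumed, rough)
    peak_bw = 8e12         # ~8 TB/s HBM bandwidth (assumed, rough)
    print(peak_flops / peak_bw)   # ~562 FLOPs/byte to stay compute-bound

    # Decode-time matrix-vector work in inference reads each weight once
    # per token (~1-2 FLOPs/byte), so it sits far below that ridge point:
    # the chip waits on memory, which is the idleness described above.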

croon 6 hours ago | parent | prev | next [-]

Ironwood is 192GB, Blackwell is 96GB, right? Or am I missing something?

NaomiLehman 6 hours ago | parent | prev [-]

I think it's not about the cost but the limits of quickly accessible RAM.