xnx 3 days ago

> AI companies are constrained by what fits in this generation of hardware, and waiting for the next generation to become available.

Does this apply to Google, which uses custom-built TPUs while everyone else uses stock Nvidia hardware?

ACCount37 3 days ago | parent [-]

By all accounts, what's in Google's racks right now (TPU v5e, v6e) is vaguely H100-adjacent, in both raw performance and supported model size.

If Google wants anything better than that? They, too, have to wait for the new hardware to arrive. Chips have a lead time - they may be your own designs, but you can't just wish them into existence.

xxpor 3 days ago | parent [-]

Aren't chips constrained by process node and reticle size, and memory therefore by how much HBM you can stuff around the compute die? I'd expect everyone to support more or less the same model size at the same time because of this, absent a fundamentally different architecture.
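The argument above reduces to back-of-envelope arithmetic: the largest model a single accelerator can hold is roughly its on-package HBM capacity divided by bytes per parameter. A minimal sketch, where the stack count, per-stack capacity, and precision are illustrative assumptions (loosely H100-like), not vendor specs:

```python
# Sketch of the "model size is capped by on-package HBM" argument.
# All numbers below are illustrative assumptions, not vendor specs.

def max_params(num_hbm_stacks: int, gib_per_stack: int, bytes_per_param: int) -> int:
    """Largest model (in raw parameters) that fits in on-package HBM,
    ignoring activations, KV cache, and optimizer state."""
    total_bytes = num_hbm_stacks * gib_per_stack * 1024**3
    return total_bytes // bytes_per_param

# Assume an H100-class part: 5 HBM stacks x 16 GiB, fp16 weights (2 bytes/param).
print(max_params(5, 16, 2))  # -> 42949672960, i.e. ~43B params of raw weights
```

Since every vendor faces the same reticle limit on die area and the same HBM stack capacities from the same memory suppliers, the per-device ceiling lands in roughly the same place for everyone in a given hardware generation.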