radarsat1 5 days ago

Why haven't Nvidia developed a TPU yet?

dist-epoch 5 days ago | parent | next [-]

This article suggests they sort of did: 90% of the FLOPs come from matrix multiplication units.

They leave some performance on the table, but they gain flexible compilers.
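That 90% figure is plausible from first principles: for transformer workloads, the matrix multiplies dwarf everything else. A rough, illustrative back-of-the-envelope sketch (dimensions are assumed GPT-style values, not taken from the article) of the matmul share of FLOPs in a single transformer layer:

```python
# Rough, illustrative estimate of what fraction of a transformer layer's
# FLOPs are matrix multiplications. Dimensions below are assumptions for
# illustration, not figures from the article.

d_model = 4096   # hidden size (assumed)
seq_len = 2048   # sequence length (assumed)

# Matmul FLOPs per layer, using 2*m*n*k per GEMM:
qkv_proj   = 2 * seq_len * d_model * (3 * d_model)       # Q, K, V projections
attn_out   = 2 * seq_len * d_model * d_model             # attention output proj
attn_score = 2 * seq_len * seq_len * d_model * 2         # QK^T and scores @ V
mlp        = 2 * seq_len * d_model * (4 * d_model) * 2   # MLP up + down proj
matmul_flops = qkv_proj + attn_out + attn_score + mlp

# Non-matmul FLOPs (softmax, layernorm, activations, residuals): a handful
# of elementwise ops per activation value, generously budgeted here.
elementwise_flops = 20 * seq_len * d_model + 10 * seq_len * seq_len

frac = matmul_flops / (matmul_flops + elementwise_flops)
print(f"matmul share of FLOPs: {frac:.2%}")
```

With these assumed dimensions the matmul share comes out well above 99%; the ~90% in the article is presumably measured at the hardware level, where memory-bound elementwise kernels eat a disproportionate share of wall-clock time relative to their FLOP count.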

Philpax 5 days ago | parent | prev | next [-]

They don't need to. Their hardware and programming model are already dominant, and TPUs are harder to program for.

HarHarVeryFunny 5 days ago | parent | prev [-]

Meaning what? Something less flexible? Fewer CUDA cores and more Tensor Cores?

The majority of NVidia's profits (almost 90%) do come from the data center segment, most of which is going to be neural net acceleration, and I'd have to assume that they have optimized their data center products to maximize performance for typical customer workloads.

I'm sure that Microsoft would provide feedback to Nvidia if they felt changes were needed to better compete with Google in the cloud compute market.

cwmoore 4 days ago | parent [-]

> most of which is going to be neural net acceleration

is it?

HarHarVeryFunny 4 days ago | parent [-]

I've got to assume so, since data center revenue seems to have grown in sync with recent AI adoption. CUDA has been around for a long time, so it would be highly coincidental if non-AI CUDA usage were surging just as AI usage takes off, and new data center build announcements seem to be invariably linked to AI.