mrbungie 5 hours ago

It's funny when you then read Nvidia's latest tweet [1] suggesting that their tech is still better, based on pure vibes, like everything else in the (Gen)AI era.

[1] https://x.com/nvidianewsroom/status/1993364210948936055

qcnguy 19 minutes ago

Not vibes. TPUs have fallen behind or had to be redesigned from scratch many times as neural architectures and workloads evolved, whereas the more general purpose GPUs kept on trucking and building on their prior investments. There's a good reason so much research is done on Nvidia clusters and not TPU clusters. TPU has often turned out to be over-specialized and Nvidia are pointing that out.

pests 13 minutes ago

You say that like it's a bad thing. Nvidia architectures keep changing and getting more advanced as well, with specialized tensor operations, different accumulators and caches, etc. I see no issue with progress.
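For a concrete sense of what "different accumulators" buys you: tensor cores multiply low-precision inputs but accumulate into a wider type. A minimal numpy sketch of that pattern (array size and seed are arbitrary illustration, not anything from the thread):

    import numpy as np

    # fp16 inputs, as a tensor core would take; the question is accumulator width.
    rng = np.random.default_rng(0)
    a = rng.standard_normal(4096).astype(np.float16)
    b = rng.standard_normal(4096).astype(np.float16)

    acc16, acc32 = np.float16(0), np.float32(0)
    for x, y in zip(a, b):
        p = x * y                          # fp16 product
        acc16 = np.float16(acc16 + p)      # fp16 accumulate: rounding error compounds
        acc32 = acc32 + np.float32(p)      # fp32 accumulate, as tensor cores do

    exact = np.dot(a.astype(np.float64), b.astype(np.float64))
    print(abs(float(acc16) - exact), abs(float(acc32) - exact))

The fp16 accumulator drifts visibly after a few thousand terms; the fp32 one stays near the exact answer.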

bigyabai an hour ago

> based on pure vibes

The tweet gives their justification: CUDA isn't an ASIC. Nvidia GPUs were popular for crypto mining, protein folding, and now AI inference too. TPUs are tensor ASICs.

FWIW I'm inclined to agree with Nvidia here. Scaling up a systolic array is impressive but nothing new.
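For anyone who hasn't met the term: a systolic array is a grid of multiply-accumulate units that operands march through in lockstep, one hop per cycle. A toy output-stationary simulation in numpy (a sketch of the general idea only, not Google's actual MXU dataflow, which is weight-stationary):

    import numpy as np

    def systolic_matmul(A, B):
        # Output-stationary systolic array: PE (i, j) accumulates C[i, j].
        # Rows of A enter from the left, columns of B from the top, each
        # skewed by one cycle so matching operands meet in the right PE.
        n, k = A.shape
        _, m = B.shape
        C = np.zeros((n, m))
        a_reg = np.zeros((n, m))   # A value currently held in each PE
        b_reg = np.zeros((n, m))   # B value currently held in each PE
        for t in range(n + m + k - 2):        # cycles until all operands drain
            a_reg[:, 1:] = a_reg[:, :-1]      # values hop one PE right...
            b_reg[1:, :] = b_reg[:-1, :]      # ...and one PE down, per cycle
            for i in range(n):
                s = t - i                     # row i is delayed i cycles
                a_reg[i, 0] = A[i, s] if 0 <= s < k else 0.0
            for j in range(m):
                s = t - j                     # column j is delayed j cycles
                b_reg[0, j] = B[s, j] if 0 <= s < k else 0.0
            C += a_reg * b_reg                # every PE does one MAC per cycle
        return C

    A = np.random.rand(3, 5); B = np.random.rand(5, 4)
    assert np.allclose(systolic_matmul(A, B), A @ B)

The point of the skewing is that A[i, s] and B[s, j] both reach PE (i, j) at cycle i + j + s, so each PE only ever needs its two neighbors, which is what makes the design so cheap to scale.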

almostgotcaught 3 hours ago

> NVIDIA is a generation ahead of the industry

a generation is 6 months

wmf 3 hours ago

For GPUs a generation is 1-2 years.

almostgotcaught 2 hours ago

no https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...

Arainach 2 hours ago

What in that article makes you think a generation is shorter?

* Turing: September 2018

* Ampere: May 2020

* Hopper: March 2022

* Lovelace (designed to work with Hopper): October 2022

* Blackwell: November 2024

* Next: December 2025 or later

With the single exception of Lovelace (arguably not a full generation), there are multiple years between generations.