HarHarVeryFunny 4 hours ago

The various AI accelerator chips, such as TPUs and NVidia GPUs, are only compatible to the extent that high-level tools like PyTorch and Triton (a kernel compiler) may support both. That is like saying x86 and ARM chips are compatible because gcc supports them both as targets, but it does not mean you can take a binary compiled for ARM and run it on an x86 processor.
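
To make that concrete, here is a rough sketch (assuming PyTorch, with the torch_xla package for the TPU side): the same model code can be pointed at either backend, but the portability lives in the framework's dispatch, not in anything the chips share. What actually executes on each chip is generated separately by each backend.

    # Sketch: framework-level portability across accelerators.
    # The same PyTorch source runs on an NVidia GPU or a TPU, but
    # each backend compiles its own chip-specific machine code;
    # nothing built for one chip can execute on the other.
    import torch
    import torch.nn as nn

    model = nn.Linear(1024, 1024)
    x = torch.randn(8, 1024)

    if torch.cuda.is_available():
        # NVidia path: dispatches to CUDA kernels / cuBLAS
        device = torch.device("cuda")
    else:
        # TPU path (assumes torch_xla is installed): lowered
        # through the XLA compiler to TPU code
        import torch_xla.core.xla_model as xm
        device = xm.xla_device()

    y = model.to(device)(x.to(device))  # same source, different binaries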

For these massive, expensive-to-train AI models the differences hit harder, because at the kernel level, where the rubber meets the road, teams are going to be wringing every last dollar of performance out of the chips by writing hand-optimized kernels, highly customized to each chip's architecture and performance characteristics. It may go deeper than that too, with the detailed architecture of the models themselves tweaked to perform best on a specific chip.
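
For a toy picture of what "hand-optimized kernel" means, here is a minimal Triton kernel. The BLOCK_SIZE value is purely illustrative; real kernels tune many such parameters (tile shapes, warp counts, pipelining depth) per chip, which is exactly why they don't transfer.

    # Sketch: a Triton vector-add kernel. Tiling constants like
    # BLOCK_SIZE are chosen for one GPU's SM count, shared-memory
    # size, and warp width; another chip wants different values,
    # and a TPU can't run the compiled GPU code at all.
    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = x.numel()
        grid = (triton.cdiv(n, 1024),)
        # 1024 is an illustrative choice, not a tuned one
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out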

So, the bottom line is that you can't take a model "compiled to run on TPUs" and train it on NVidia chips just because you have spare capacity there.

moralestapia 3 hours ago | parent

>but they are also buying as many Nvidia chips as they can get their hands on

>But is Google buying those GPU chips for their own use

>google buys nvidia GPUs for cloud, I don't think they use them much or at all internally.

We're not talking about GPUs.