lukeschlather · 2 hours ago
This feels a lot like the RISC/CISC debate: more academic than it seems. Nvidia is designing their GPUs primarily to do exactly the same tasks TPUs are doing right now. Even within Google it's probably hard to tell whether it matters on a 5-year timeframe. It certainly gives Google an edge on some things, but in the fullness of time, "GPUs" like the H100 are primarily used to run tensor models, and their hardware will be ruthlessly optimized for that purpose.

And outside of Google this is a very academic debate. Any efficiency gains over GPUs will primarily turn into profit for Google rather than benefiting me as a developer or user of AI systems. Since Google doesn't sell TPUs, they are extremely well-positioned to ensure no one else can profit from any advantages created by TPUs.
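To make the convergence concrete, here's a minimal JAX sketch (the shapes, dtypes, and function names are illustrative, not from either comment): the identical tensor program compiles via XLA to whichever accelerator happens to be attached, which is part of why H100s and TPUs end up optimized for the same workload.

    import jax
    import jax.numpy as jnp

    # The same tensor program runs unchanged on whatever accelerator
    # is present -- TPU, GPU, or CPU. XLA lowers it to the matrix
    # units of the local backend.
    print("backend:", jax.default_backend())  # e.g. "tpu", "gpu", "cpu"

    key = jax.random.PRNGKey(0)
    a = jax.random.normal(key, (1024, 1024), dtype=jnp.bfloat16)
    b = jax.random.normal(key, (1024, 1024), dtype=jnp.bfloat16)

    @jax.jit
    def matmul(x, y):
        # On an H100 this lowers to tensor-core MMA instructions; on
        # a TPU, to the MXU systolic array. The source is identical.
        return jnp.dot(x, y)

    print(matmul(a, b).shape)  # (1024, 1024)

At the software level the TPU/GPU distinction is already abstracted away, which is what makes the debate feel academic from the outside.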
turtletontine · 2 hours ago
> Since Google doesn't sell TPUs, they are extremely well-positioned to ensure no one else can profit from any advantages created by TPUs.

The first part is true at the moment; I'm not sure the second follows. Microsoft is developing its own “Maia” chips for running AI on Azure, and everyone else is also getting into the hardware-accelerator game. Google is certainly ahead of the curve in building full-stack hardware that's very, very specialized for machine learning. But everyone else is moving in the same direction: a lot of the action is in buying up companies that make interconnects and fancy networking equipment, and AMD and NVIDIA continue to hyper-specialize their data center chips for neural networks.

Google is in a great position, for sure. But I don't see how they can stop other players from converging on similar solutions.