| ▲ | hhh 2 days ago |
| No. It used to be more even between datacenter and gaming for NVIDIA, but that hasn't been the case for a few years. Gaming has brought in less money than networking (Mellanox) since Q4 '24. https://morethanmoore.substack.com/p/nvidia-2026-q2-financia... |
|
| ▲ | vlovich123 2 days ago | parent | next [-] |
| But the same thing that makes GPUs powerful at rendering is what AI needs: modern gaming GPUs are basically supercomputers, with the HW and SW to do programmable, embarrassingly parallel work. That describes modern game rendering, but also AI and crypto (and plenty of science and engineering), which is the second revolution Intel completely missed (the first being mobile). |
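| To make the "programmable, embarrassingly parallel" point concrete, here is a minimal CUDA-style sketch (illustrative only, not from any real codebase): the same one-thread-per-element shape could be a pixel blend or a neuron pre-activation. |

```cuda
// Minimal sketch of the embarrassingly parallel pattern both rendering and AI
// reduce to: one thread per element, no dependencies between threads.
__global__ void scale_add(const float* x, const float* y, float* out,
                          float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
    if (i < n)
        out[i] = a * x[i] + y[i];                   // independent per-element math
}

// Launch with one thread per element, e.g.:
//   scale_add<<<(n + 255) / 256, 256>>>(d_x, d_y, d_out, 2.0f, n);
```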

| ▲ | patagurbon 2 days ago | parent | next [-] |
| AI (apparently) needs much lower precision in training, and certainly in inference, than gaming requires, though. A very, very large part of the die on modern datacenter GPUs is effectively useless for gaming. |
| ▲ | vlovich123 2 days ago | parent [-] |
| I disagree that HW blocks for lower precision take up that much die space. Datacenter GPUs are useless for gaming because they're tuned that way: the H100 still has 24 raster operation units (the 4050 has 32) and 456 texture mapping units (the 4090 has 512). There's only so far they can tune the HW architecture toward one use case or the other without breaking some fundamental architectural assumptions, and consumer cards still come with tensor units and support for lower precision. That's because the HW costs and unit economics strongly favor a unified architecture that scales across workloads over discrete implementations for each market segment. They also haven't bothered investing in the SW to make the H100 run games well on their consumer drivers. That doesn't mean it's impossible, and none of it takes away from the fact that the H100 and consumer GPUs are much more similar than different and could theoretically be made to run the same workloads at comparable performance. |
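| As a rough illustration of that point about precision support (a sketch, not a benchmark): the same FP16 arithmetic below compiles and runs on consumer RTX cards and on an H100 alike; the die-space question is how much silicon each part dedicates to it, not whether the precision exists. |

```cuda
#include <cuda_fp16.h>

// Half-precision version of a trivially parallel kernel (illustrative only).
// The __half type and __hmul/__hadd intrinsics work on any reasonably recent
// consumer or datacenter NVIDIA GPU.
__global__ void scale_add_fp16(const __half* x, const __half* y, __half* out,
                               __half a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hadd(__hmul(a, x[i]), y[i]);  // a * x + y in FP16
}
```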

| ▲ | jlarocco 2 days ago | parent | prev [-] |
| I don't think anybody is using gaming GPUs to do serious AI at this point, though. |
| ▲ | vlovich123 2 days ago | parent [-] |
| But you can use a gaming card to do AI, and you can use an H100 to game; the architectures are quite similar. And I expect upcoming edge AI applications to end up using GPUs rather than dedicated AI accelerator HW, because A) you need something to drive the display anyway, and B) the fixed-function DSPs that have been marketed as "AI accelerators" are worse than useless for running LLMs. |
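| One way to see the LLM point (a back-of-envelope sketch with assumed round numbers, not measurements): autoregressive decode streams essentially all of the weights for every generated token, so throughput is bounded by memory bandwidth far more than by fixed-function compute. |

```cuda
// Back-of-envelope decode throughput bound (host-side only; all numbers are
// assumptions for illustration): tokens/s <= memory bandwidth / model bytes.
#include <cstdio>

int main() {
    double model_bytes = 7e9 * 0.5;  // assumption: 7B params at 4-bit weights
    double gpu_bw      = 1000e9;     // assumption: ~1 TB/s of GDDR/HBM bandwidth
    double npu_bw      = 60e9;       // assumption: ~60 GB/s of shared LPDDR
    printf("GPU decode upper bound: ~%.0f tok/s\n", gpu_bw / model_bytes);
    printf("NPU-class block upper bound: ~%.0f tok/s\n", npu_bw / model_bytes);
    return 0;
}
```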
|
|
|
| ▲ | pjmlp 2 days ago | parent | prev [-] |
| Depends on whether one cares about a PlayStation/Xbox-like experience, or a Switch-like one. |