dahart | 5 days ago
What makes you think that? It appears most of this material came straight out of NVIDIA's documentation. What do you think is missing?

I just checked, and the H100 diagram, for example, is copied (without correct attribution) from the H100 whitepaper: https://resources.nvidia.com/en-us-hopper-architecture/nvidi... Much of the info on compute and bandwidth comes from that and other architecture whitepapers, as well as from the CUDA C++ Programming Guide, which covers a lot of what this article shares, particularly in chapters 5, 6, and 7: https://docs.nvidia.com/cuda/cuda-c-programming-guide/

There's plenty of value in third parties distilling this material into short-form versions and writing their own takes on it, but this article wouldn't have been possible without NVIDIA's docs, so the speculation, FUD, and shade are perhaps unjustified.