0manrho 5 hours ago

People pulling their heads out of their ass as to how to actually deploy these systems at scale is a big part of it: to do this effectively, you need to do more than just throw pallets of GPUs at the problem, e.g. properly considering the topologies of both NVMe-over-Fabrics and PCIe roots/lanes [0]. Combine that with advances in various technologies (RDMA, CXL, cuDF/BaM/GPUDirect Storage, etc.) that meaningfully improve how system RAM can be integrated and leveraged.

We're also hitting the five-year mark since DDR5 became readily available, which means a lot of enterprise hardware bought on DDR4 is going EOL and being replaced with DDR5. Since many current platforms have far more memory channels than their predecessors, that replacement cycle means more DRAM is being bought per node, and in total, than before (rough numbers at the end of this comment). A lot of enterprise was still buying new DDR4 into 2023 because it was a more affordable way to deploy systems with lots of PCIe lanes, and those lanes mattered more than whatever performance gain DDR5 and its associated CPUs would have bought for the extra cost. (Also, early DDR5 wasn't really any faster than DDR4, given how loose its timings were, unless you were willing to pay a BIG premium.)

Regarding the hype of the day, AI specifically: part of it is the rise of wrappers, agents, and inference in general that can run on CPUs and leverage system RAM. These use cases aren't as latency-sensitive as the training side of things; the network latency from the remote user to the datacenter means a latency hit from crossing the CPU ring bus (Infinity Fabric, QPI, whatever you want to call it) is a much smaller share of the overall overhead. The cost/benefit/availability math there has also increased demand for non-GPU AI compute, and for RAM.

I wouldn't rule out corruption/price fixing (they've done it before), but I have no evidence of it. It wouldn't surprise me, but I don't think this is it (unless the problem persists for several quarters/years).

There's some geopolitics and FOMO (corporate keeping up with the Joneses) and economics that goes into this as well, but I can't really speculate on that specifically; it's not my area of expertise.

Suffice to say, it's kind of like a bank run: it's not so much that demand itself hit the curve of the hockey stick, but that it increased gradually until it crossed a threshold that started causing delays in deliveries/deployments. Given how important many companies consider being on the cutting edge here, this led to a sudden spike in volume customers willing to pay premiums for early delivery to hit deployment deadlines, artificially inflating demand and further constraining supply, which fed back into the loop and pushed transient demand even higher (there's a toy model of this at the end of this comment as well).

[0]: Yes, NVMe NAND flash is different from DRAM, but the systems/clusters that host the NVMe JBODs tend to use lots of system RAM for their index/metadata/"superhot" data layer (think memcached, Redis, the MDS nodes for Lustre, etc.), and with the advent of CXL and SCM you can deploy even more DRAM to a cluster/fabric than what is strictly presented by the CPU/mobo's memory controllers/channels. This isn't driving overall market volume, but it is a source of fierce competition for supply at the very "top" of the DRAM/flash market.

TL;DR: Convergence of a lot of things driving demand.
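To put rough numbers on the channel-count point: a minimal back-of-the-envelope sketch in Python, assuming an older 8-channel DDR4-3200 platform versus a newer 12-channel DDR5-5600 one, both populated with the same 64 GB DIMMs. These are illustrative figures typical of the respective server generations, not any specific SKU:

    # Rough per-node DRAM numbers for the DDR4 -> DDR5 replacement cycle.
    # Platform specs are illustrative assumptions, not specific SKUs.

    def node_dram(channels, mt_per_s, dimm_gb, dimms_per_channel=1):
        """Return (capacity in GB, peak bandwidth in GB/s) for one socket."""
        bus_bytes = 8  # 64-bit data path per channel (ignoring ECC bits)
        capacity = channels * dimms_per_channel * dimm_gb
        bandwidth = channels * mt_per_s * bus_bytes / 1000.0
        return capacity, bandwidth

    ddr4 = node_dram(channels=8,  mt_per_s=3200, dimm_gb=64)
    ddr5 = node_dram(channels=12, mt_per_s=5600, dimm_gb=64)

    print("DDR4 node: %4d GB, %5.1f GB/s" % ddr4)  # 512 GB, 204.8 GB/s
    print("DDR5 node: %4d GB, %5.1f GB/s" % ddr5)  # 768 GB, 537.6 GB/s

Same DIMM capacity, but 50% more channels means 50% more DIMMs (and DRAM) bought per node just to populate the platform, before anyone even decides they want more memory.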
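And since the bank-run dynamic is the core of my claim, here's a deliberately crude toy model of that feedback loop. Every constant in it is made up; the only point is the shape of the curve it produces:

    # Toy model: organic demand creeps up linearly, but once quoted
    # lead times cross a panic threshold, buyers pad their orders to
    # protect deployment deadlines, which stretches lead times further.

    capacity = 100.0         # units the suppliers can ship per quarter
    backlog = 0.0            # unshipped orders carried forward
    panic_threshold = 0.25   # lead time (quarters) at which buyers panic

    for quarter in range(16):
        organic = 80.0 + 3.0 * quarter            # slow organic growth
        lead_time = backlog / capacity            # quarters to clear backlog
        panic = 1.0 + 2.0 * max(0.0, lead_time - panic_threshold)
        orders = organic * panic                  # padded orders
        backlog = max(0.0, backlog + orders - capacity)
        print(f"Q{quarter:2d}: orders={orders:7.1f}  lead_time={lead_time:5.2f}q")

    # Orders track organic demand for years, then explode within a few
    # quarters of lead times crossing the threshold -- the hockey stick.

Run it and orders sit in the 80-110 range for a dozen quarters, then blow past 500 within two or three quarters of the backlog building up. That spike is transient over-ordering, not real end demand, which is why I'd expect it to unwind once lead times normalize.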
People pulling their heads out of their ass as to how to actually deploy these systems at scale (AKA to do this effectively, you need to do more than just throw pallets of GPU's at it, such as properly considering Topologies of both NVMe-over-Fabric and PCIe roots/lanes [0]) combined with advances in various technologies (eg RDMA, CXL, cuDF/BaM/GPUD2S/etc) that meaningfully enhance how system ram can be integrated and leveraged are a big part of it. Also we're hitting that 5 years after DDR5 being readily available which means that a lot of existing enterprise hardware that was on DDR4 is going EOL and being replaced with DDR5 which, given many platforms these days have many more channels available than previously, results in more DRAM being bought than was previously used per node and in total. A lot of enterprise was still buying new DDR4 into 2023 as it was a more affordable way to deploy systems with lots of PCIe lanes which was more important than any the costs associated with the performance gain from DDR5 or related CPU's. (Also, early days DDR5 wasn't really any faster than DDR4 with how loose the timing was unless you were willing to pay a BIG premium) Regarding the hype of the day: AI specifically, part of it is the rise of wrappers and agents and inference in general that can run on CPU's/leverage system ram. These usecases aren't as sensitive to latency as the training side of things as the network latency from the remote user to the datacenter means latency hits due to hitting the CPU ringbus(infinity fabric, QPI, whatever you want to call it) results in a much less significant share over the overall overhead, and the cost/benefit/availability concerns there has also increased the demand for non-GPU AI compute and RAM. I wouldn't rule out corruption/price fixing (They've done it before) but I have no evidence of this. Wouldn't surprise me, but I don't think this is it (unless this problem persists for several quarters/years) There's some geopolitics and FOMO (Corporate keeping up with the joneses) and economics that goes into this as well but I can't really speculate on that specifically, that's not really my area of expertise. Suffice to say, it's kind of like a bank run where it's not so much that the demand itself hit the curve of the hockey stick, but it was gradually increasing until it hit a threshold that was starting to cause delays in delivery/deployments. Given how important many companies view being on the cutting edge here, this lead to sudden spike in volume customers willing to pay premiums for early delivery to hit deployment deadlines, artificially inflating demand and further constraining supply, which just fed back into that feedback loop pushing transient demand even higher. 0: Yes NVMe NAND flash is different than DRAM flash, but the systems/clusters that host the NVMe JBOD's tend to use lots of sysRAM for their index/metadata/"superhot" data layer (think memcached, Redis, the MDS nodes for Lustre, etc), and with the advent of CXL and SCM you can deploy even more DRAM to a cluster/fabric than what is strictly presented by the CPU/mobo's memory controllers/channels. This is not driving overall market volume, but is a source of fierce competition for supply at the very "top" of the DRAM/Flash market. TL;DR: Convergence of a lot of things driving demand. | ||