| ▲ | jl6 an hour ago |
| Can someone explain why OpenAI is buying DDR5 RAM specifically? I thought LLMs typically ran on GPUs with specialised VRAM, not on main system memory. Have they figured out how to scale using regular RAM? |
|
| ▲ | fullstop an hour ago | parent | next [-] |
| They're not. They are buying wafers / production capacity to make HBM so there is less DDR5 supply. |
| |
| ▲ | jl6 an hour ago | parent [-] |
| OK, fair enough, but why is OpenAI buying production capacity rather than, say, paying NVIDIA to do it? OpenAI aren't the ones making the hardware? |
| ▲ | baq a few seconds ago | parent | next [-] |
| > OpenAI aren't the ones making the hardware? |
| |
| How surprised would you be if they announced that they are? |
| ▲ | snuxoll 43 minutes ago | parent | prev | next [-] |
| Just because Nvidia happily sells people discrete GPUs, DGX systems, etc., doesn't mean they would turn down a company like OpenAI paying them $$$ for just the packaged chips and the technical documentation to build their own PCBs, or letting OpenAI provide their own DRAM supply for production on an existing line. If you have a potentially multi-billion-dollar contract, most businesses will do things outside their standard product offerings to take in that revenue. |
| ▲ | fullstop an hour ago | parent | prev [-] |
| Because they can provide the materials to NVIDIA for production and prevent Google, Anthropic, etc. from having them. |
|
|
|
| ▲ | Thegn an hour ago | parent | prev | next [-] |
| They didn't buy DDR5 - they bought raw wafer capacity, and a ton of it at that. |
|
| ▲ | an hour ago | parent | prev [-] |
| [deleted] |