tymscar a day ago:
DDR5 is a couple of orders of magnitude slower than really good VRAM. That's one big reason.
zrm 20 hours ago (reply):
DDR5 is ~8 GT/s, GDDR6 is ~16 GT/s, GDDR7 is ~32 GT/s. It's faster, but the difference isn't crazy, and if the premise is a lot of slots then you could also have a lot of channels. 16 channels of DDR5-8200 would have slightly more memory bandwidth than an RTX 4090.
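A rough back-of-the-envelope check of that claim (the 64-bit channel width and the 4090's 384-bit GDDR6X bus at ~21 GT/s are my own assumptions, not stated in the comment above):

    # Peak bandwidth ~= transfers/s * bytes per transfer * number of channels.
    def bandwidth_gbs(transfer_gts, bus_bits, channels=1):
        return transfer_gts * (bus_bits / 8) * channels

    # 16 channels of DDR5-8200, 64 bits each: 16 * 8.2 GT/s * 8 B
    print(bandwidth_gbs(8.2, 64, channels=16))  # ~1049.6 GB/s
    # RTX 4090: 384-bit GDDR6X at ~21 GT/s
    print(bandwidth_gbs(21, 384))               # ~1008.0 GB/s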
| |||||||||||||||||
dawnerd a day ago (reply):
But it would still be faster than splitting the model up across a cluster, right? I've also wondered why they haven't just shipped GPUs like CPUs.
| |||||||||||||||||
cogman10 a day ago (reply):
For AI, "really good" isn't really a requirement. If a middle-ground memory module could be made, it'd be pretty appealing.
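A sketch of why middling bandwidth can still be useful: single-stream LLM decode is roughly memory-bandwidth-bound, since each generated token streams all of the weights once. The model size and bandwidth figures here are illustrative assumptions, not from this thread:

    # Memory-bound decode: tokens/s ~= memory bandwidth / model size in bytes.
    def decode_tokens_per_sec(bandwidth_gbs, params_billion, bytes_per_param):
        model_gb = params_billion * bytes_per_param
        return bandwidth_gbs / model_gb

    # e.g. a 70B-parameter model quantized to 4 bits (~0.5 B/param) on 500 GB/s memory:
    print(decode_tokens_per_sec(500, 70, 0.5))  # ~14 tokens/s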