themafia 4 days ago
I think you'd want to go the other way. GPU RAM is high speed and power hungry, so there tends to not be very much of it on the card. That's part of the reason we keep increasing bus bandwidth: so the CPU can touch that GPU RAM at the highest possible speed.

It makes me wonder, though, whether a NUMA model on the GPU would be a better idea. Add a larger pool of lower-power, lower-speed RAM to the GPU card and let the CPU preload as much data as possible onto it. Then, instead of the CPU pushing textures over the PCI bus into the GPU, why not just send the GPU a DMA request and ask it to move the data from its low-speed memory to its high-speed memory? It's a whole new architecture, but it seems to get at the actual problems we have in this space.
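(The closest analogue available today is probably asynchronous DMA copies from CPU-side pinned memory into VRAM. A minimal sketch using the CUDA runtime API follows; the hypothetical low-power on-card pool from the comment doesn't exist, so pinned host memory stands in for it here, and the GPU's copy engine does the actual data movement once the CPU has staged the bytes.)

```c
// Sketch: CPU stages data once into pinned ("slow") memory, then the GPU's
// DMA copy engine pulls it into fast device memory asynchronously, without
// the CPU touching the bytes again. Pinned host memory is a stand-in for the
// hypothetical low-speed on-card pool described above.
#include <cuda_runtime.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const size_t size = 64 << 20;            /* 64 MiB "texture" */
    void *staging = NULL, *vram = NULL;
    cudaStream_t stream;

    cudaHostAlloc(&staging, size, cudaHostAllocDefault); /* pinned, DMA-able */
    cudaMalloc(&vram, size);                             /* fast GPU memory */
    cudaStreamCreate(&stream);

    memset(staging, 0xAB, size);             /* CPU "preloads" the data once */

    /* The copy runs on the GPU's DMA engine; the CPU only enqueues it. */
    cudaMemcpyAsync(vram, staging, size, cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    printf("staged %zu bytes into device memory via DMA\n", size);

    cudaFree(vram);
    cudaFreeHost(staging);
    cudaStreamDestroy(stream);
    return 0;
}
```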
kokada 4 days ago | parent
Isn't what you described Direct Storage?