fc417fc802 2 days ago:
Nope, the reserved memory is what's available to use from the various APIs (VK, GL, etc.). More recently there's OS support for flexible on-demand allocation by the GPU driver. Of course, the APIs have let you use pointers to CPU memory directly for something like a decade now. However, that requires maintaining two separate code paths, because doing so while running on a dGPU is _extremely_ expensive.
kimixa 2 days ago | parent:
As someone who's worked on GPU drivers for shared-memory systems for over 15 years, supporting hardware that was put on the market over 20 years ago: in my experience they've "always" been able to dynamically assign memory pages to the GPU. The "reserved" memory is more of a guaranteed minimum so the thing can actually light up. Some hardware blocks also had stricter requirements (e.g. the display block might require physically contiguous addresses, as might the MMU page tables themselves), so we would reserve a chunk up front to ensure those allocations could always be satisfied. But that tended to be a small proportion of the total "GPU memory used". Sure, sharing the virtual address space is less well supported, but the total amount of memory the GPU can use is flexible at runtime.