| ▲ | Bluecobra 2 hours ago |
| I didn’t realize that you can get 128GB of memory in a notebook, that is impressive! |
|
| ▲ | lambda an hour ago | parent | next [-] |
| I've got a 128 GiB unified-memory Ryzen AI Max+ 395 (aka Strix Halo) laptop. Trying to run LLMs somehow makes 128 GiB of memory feel incredibly tight. I'm frequently getting OOMs when running models that push the limits of what this can fit; I need to leave more memory free for the system than I was expecting. I was expecting to be able to run models of up to ~100 GiB quantized, leaving 28 GiB for system memory, but it turns out I need to leave more room for context and overhead. ~80 GiB quantized seems like a better practical limit, since this isn't a headless system: I'm running a desktop environment, browser, IDE, compilers, etc. in addition to the model. And the memory bandwidth limitation for running the models is real! 10B active parameters at 4-6 bit quants feels usable but slow; much more than that and it really starts to feel sluggish. So this can fit models like Qwen3.5-122B-A10B, but it's not the speediest and I had to use a smaller quant than expected. Qwen3-Coder-Next (80B/3B active) feels fine on speed, though not quite as smart. Still trying out models; Nemotron-3-Super-120B-A12B just came out, but it looks like it'll be a bit slower than Qwen3.5 while not offering any more performance, though I do really like that they have been transparent in releasing most of its training data. |
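The headroom problem above is easy to see with back-of-envelope math. A minimal Python sketch (the layer/head/context numbers below are illustrative placeholders, not any particular model's real dimensions):

```python
def weights_gib(params_billions, bits_per_weight):
    """Approximate size of the quantized weights alone, in GiB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

def kv_cache_gib(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
    """KV cache: K and V each store layers * kv_heads * head_dim values per token."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 2**30

# A ~122B-parameter model at 4-bit quant: the weights alone are ~57 GiB.
w = weights_gib(122, 4)

# A 32k-token context with made-up but plausible dims (48 layers,
# 8 KV heads, head_dim 128, fp16 cache) adds another ~6 GiB --
# before runtime buffers, the compositor, browser, IDE, etc.
kv = kv_cache_gib(layers=48, kv_heads=8, head_dim=128, context_len=32768)

print(f"weights ~{w:.1f} GiB + KV cache ~{kv:.1f} GiB")
```

So a model whose quantized weights "fit" can still blow past the budget once context and everything else on the desktop is counted.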
| |
| ▲ | zozbot234 30 minutes ago | parent [-] | | There's some very recent ongoing work in some local AI frameworks on enabling mmap by default, which can potentially obviate some RAM-driven limitations, especially for sparse MoE models. Running with mmap and too little RAM still comes with severe slowdowns, since read-only model parameters have to be shuttled in from storage as they're needed. But for hardware with fast enough storage, and especially for models that "almost" fit in the filesystem cache in RAM, this can be a huge unblock at negligible cost -- especially if it enables further unblocks via adding extra swap for the KV cache and long contexts. |
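The mechanism here is simple to demonstrate. A minimal Linux/Unix sketch (not any particular framework's loader -- the tiny temp file stands in for a multi-GiB weights file): a read-only mapping reads nothing up front; pages are faulted in from storage on first touch and live in the kernel page cache, where clean pages can be evicted under memory pressure and re-read later instead of the process getting OOM-killed.

```python
import mmap
import os
import tempfile

# Stand-in for a model weights file (real ones are tens of GiB).
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"\x00" * (4096 * 8))
    path = f.name

with open(path, "rb") as f:
    # Read-only mapping: nothing is read from disk yet. Each 4 KiB
    # page is faulted in lazily on first access, and the kernel may
    # drop clean pages at any time and re-read them from the file.
    mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    first_page = mm[:4096]  # this slice triggers the page fault
    mm.close()

os.remove(path)
```

(On Windows you'd pass `access=mmap.ACCESS_READ` instead of `prot`.)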
|
|
| ▲ | AzN1337c0d3r an hour ago | parent | prev [-] |
Most workstation-class laptops (e.g. Lenovo P-series, Dell Precision) have 4 DIMM slots and you can get them with 256 GB (at least before the current RAM shortages). There's also the Ryzen AI Max+ 395, which has 128 GB unified memory in a laptop form factor. Only Apple has the unique dynamic allocation though. |
| |
| ▲ | the_pwner224 an hour ago | parent | next [-] | | Yep, I have a 13" gaming tablet with the 128 GB AMD Strix Halo chip (Ryzen AI Max+ 395, what a name). Asus ROG Flow Z13. It's a beast; the performance is totally disproportionate to its size & form factor. I'm not sure what exactly you're referring to with "Only Apple has the unique dynamic allocation though." On Strix Halo you set the fixed VRAM size to 512 MB in the BIOS, and you set a few Linux kernel params that enable dynamic allocation to whatever limit you want (I'm using 110 GB max at the moment). LLMs can use up to that much when loaded, but it's shared fully dynamically with regular RAM and is instantly available for regular system use when you unload the LLM. | | |
| ▲ | wilkystyle a minute ago | parent [-] | | What operating system are you using? I was looking at this exact machine as a potential next upgrade. |
| |
| ▲ | lambda an hour ago | parent | prev [-] | | > Only Apple has the unique dynamic allocation though. What do you mean? On Linux I can dynamically allocate memory between CPU and GPU. Just have to set a few kernel parameters to set the max allowable allocation to the GPU, and set the BIOS to the minimum amount of dedicated graphics memory. | | |
| ▲ | AzN1337c0d3r 31 minutes ago | parent [-] | | Maybe things have changed, but the last time I looked at this it was max 96 GB to the GPU. And it isn't dynamic in the sense that you still have to tweak the kernel parameters, which requires a reboot. Apple has none of this. | | |
| ▲ | the_pwner224 23 minutes ago | parent | next [-] | | On Strix Halo you can get at least 115 GB to the GPU (out of 128 GB total); I've tested that configuration, and I don't have any reason to believe you couldn't use 120+ GB of VRAM. Setting the kernel params is a one-time initial setup thing. You have 128 GB of RAM; set the max VRAM to 120 or whatever. The LLM will use as much as it needs and the rest of the system will use as much as it needs -- fully dynamic, with real-time allocation of resources. Honestly I haven't even thought about it since setting those kernel args a while ago. So: "options ttm.pages_limit=31457280 ttm.page_pool_size=31457280", reboot, and that's literally all you have to do. And even that is only needed because the AMD driver defaults to something like 35-48 GB max VRAM allocation; it's fully dynamic out of the box, and those params only configure the max VRAM quota. I'm not sure why they chose that number for the default. | |
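For anyone copying this: those values are page counts, and 31457280 pages × 4 KiB/page = exactly 120 GiB. The dotted `ttm.pages_limit=...` form above is kernel command-line syntax; the modprobe.d equivalent would look something like this (the filename is just a convention):

```
# /etc/modprobe.d/ttm.conf
# 31457280 pages * 4 KiB/page = 120 GiB allocatable by the GPU.
# Adjust to leave whatever headroom your desktop workload needs.
options ttm pages_limit=31457280 page_pool_size=31457280
```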
| ▲ | lambda 19 minutes ago | parent | prev [-] | | You do have to set the kernel parameters once to set the max GPU allocation, I have it set to 110 GiB, and you have to set a BIOS setting to set the minimum GPU allocation, I have it set to 512 MiB. Once you've set those up, it's dynamic within those constraints, with no more reboots required. On Windows, I think you're right, it's max 96 GiB to the GPU and it requires a reboot to change it. |
|
|
|