▲ coder543 7 hours ago

$10k gets you a Mac Studio with 512GB of RAM, which definitely can run GLM-4.7 with normal, production-grade levels of quantization (in contrast to the extreme quantization that some people talk about). The point in this thread is that it would likely be too slow due to prompt processing. (M5 Ultra might fix this with the GPU's new neural accelerators.)
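For scale, a back-of-envelope sketch in Python of the weight footprint at common quantization levels. The ~355B total-parameter count is GLM-4.5's published figure, used here purely as a stand-in assumption for GLM-4.7:

    # Rough weight-memory footprint at common quantization levels,
    # assuming ~355B total parameters (GLM-4.5's published size, used
    # as a stand-in; GLM-4.7's exact size may differ).
    params = 355e9
    for name, bytes_per_param in [("FP16", 2.0), ("Q8", 1.0), ("Q4", 0.5)]:
        print(f"{name}: ~{params * bytes_per_param / 1e9:.0f} GB")
    # FP16: ~710 GB, Q8: ~355 GB, Q4: ~178 GB -- so Q4/Q8 fits in 512GB
    # of unified memory with room to spare for KV cache; FP16 does not.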
▲ embedding-shape 5 hours ago

> $10k gets you a Mac Studio with 512GB of RAM, which definitely can run GLM-4.7 with normal, production-grade levels of quantization (in contrast to the extreme quantization that some people talk about).

Please do give that a try and report back the prefill and decode speed. Unfortunately, I think what I wrote earlier will apply again:

> In practice, it'll be incredibly slow and you'll quickly regret spending that much money on it

I'd rather put that $10K toward an RTX Pro 6000 if I were choosing between them.
▲ rynn 4 hours ago

> Please do give that a try and report back the prefill and decode speed.

M4 Max here w/ 128GB RAM. Can confirm this is the bottleneck: https://pastebin.com/2wJvWDEH

I weighed getting a DGX Spark but thought the M4 would be competitive with equal RAM. Not so much.
▲ cmrdporcupine 4 hours ago

I think the DGX Spark will likely underperform the M4, from what I've read. However, it will be better for training, fine-tuning, and similar workflows.
▲ rynn 3 hours ago

> I think the DGX Spark will likely underperform the M4, from what I've read.

For the DGX benchmarks I found, the Spark was mostly beating the M4. It wasn't cut and dried.
▲ coder543 3 hours ago

The Spark has more compute, so it should be faster for prefill (prompt processing). The M4 Max has double the memory bandwidth, so it should be faster for decode (token generation).
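Decode is bandwidth-bound because every generated token has to stream all of the active weights from memory, so peak tokens/sec is roughly bandwidth divided by the active-weight footprint. A rough sketch: the bandwidth numbers are the public specs, but the ~32B-active-at-Q4 footprint is an illustrative assumption:

    # Decode ceiling: tokens/s ~= memory bandwidth / active weights
    # streamed per token. Assumes an MoE with ~32B active parameters at
    # Q4 (~16 GB touched per token) -- an assumption, not a measurement.
    active_bytes = 32e9 * 0.5
    for name, bw in [("M4 Max (~546 GB/s)", 546e9),
                     ("DGX Spark (~273 GB/s)", 273e9)]:
        print(f"{name}: ~{bw / active_bytes:.0f} tok/s upper bound")
    # M4 Max: ~34 tok/s, DGX Spark: ~17 tok/s (real numbers land lower).
    # Prefill is the opposite story: it is compute-bound, which is where
    # the Spark's extra FLOPS pay off.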
▲ coder543 5 hours ago

> I'd rather put that $10K toward an RTX Pro 6000 if I were choosing between them.

One RTX Pro 6000 is not going to be able to run GLM-4.7, so it's not really a choice if that is the goal.
▲ bigyabai 4 hours ago

You definitely could; the RTX Pro 6000 has 96 (!!!) gigs of memory. You could load two experts at once at an MXFP4 quant, or one expert at FP8.
▲ coder543 4 hours ago

No… that’s not how this works. 96GB sounds impressive on paper, but this model is far, far larger than that.

If you are running a REAP model (eliminating experts), then you are not running GLM-4.7 at that point — you’re running some other model which has poorly defined characteristics. If you are running GLM-4.7, you have to have all of the experts accessible. You don’t get to pick and choose.

If you have enough system RAM, you can offload some layers (not experts) to the GPU and keep the rest in system RAM, but the performance is asymptotically close to CPU-only. If you offload more than a handful of layers, then the GPU is mostly sitting around waiting for work. At which point, are you really running it “on” the RTX Pro 6000?

If you want to use RTX Pro 6000s to run GLM-4.7, then you really need 3 or 4 of them, which is a lot more than $10k. And I don’t consider running a 1-bit superquant to be a valid thing here either; you’re much better off running a smaller model at that point. Quantization is often better than a smaller model, but only up to a point, and 1-bit is well beyond it.
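Putting rough numbers on "3 or 4 of them" (same ~355B stand-in assumption as the sizing sketch above):

    import math

    # How many 96GB cards just to hold the weights, assuming ~355B total
    # parameters (GLM-4.5's published size, used as a stand-in figure).
    params = 355e9
    for name, bytes_per_param in [("FP8", 1.0), ("Q4", 0.5)]:
        weights_gb = params * bytes_per_param / 1e9
        cards = math.ceil(weights_gb / 96)
        print(f"{name}: ~{weights_gb:.0f} GB of weights -> {cards}+ cards")
    # FP8: ~355 GB -> 4 cards; Q4: ~178 GB -> 2 cards for weights alone,
    # i.e. 3+ once the KV cache and activations need room. At roughly
    # $8k per card, that is a long way past the $10k Mac Studio.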
▲ bigyabai 3 hours ago

You don't need a REAP-processed model to offload on a per-expert basis. All MoE models are inherently sparse, so you're only operating on a subset of activated experts while the prompt is being processed. It's more of a PCIe bottleneck than a CPU one.

> And I don’t consider running a 1-bit superquant to be a valid thing here either.

I don't either. MXFP4 is a scalar 4-bit format.
▲ coder543 3 hours ago

Yes, you can offload random experts to the GPU, but the model will still be activating experts that live on the CPU, completely tanking performance. It won't suddenly make things fast. One of these GPUs is not enough for this model.

You're better off prioritizing offloading the KV cache and attention layers to the GPU than trying to offload a specific expert or two, but the performance loss I was talking about earlier still means a single 96GB GPU can't hold enough of the model to perform the way it needs to. You need multiple, or you need a Mac Studio.

If someone buys one of these $8000 GPUs to run GLM-4.7, they're going to be immensely disappointed. This is my point.
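The "still activating experts on the CPU" effect is easy to quantify. A sketch assuming, purely for illustration, a router that picks 8 of 160 experts per MoE layer per token (roughly the shape of recent large MoEs; these are not GLM-4.7's published routing figures, and the layer count is also assumed):

    from math import comb

    # Odds that a single token avoids every CPU-resident expert, with
    # half the experts on the GPU and a hypothetical 8-of-160 router.
    on_gpu, total, active, layers = 80, 160, 8, 90
    p_layer_all_gpu = comb(on_gpu, active) / comb(total, active)
    print(f"one layer served entirely from GPU: {p_layer_all_gpu:.2%}")    # ~0.33%
    print(f"all {layers} layers from GPU: {p_layer_all_gpu ** layers:.0e}")  # ~0
    # Nearly every token, at nearly every layer, stalls on an expert that
    # lives in system RAM, so throughput collapses toward the CPU-only rate.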
▲ benjiro 6 hours ago

> $10k gets you a Mac Studio with 512GB of RAM

Only because Apple has not yet adjusted its pricing for the new RAM-pricing reality. The moment it does, this won't be a $10k system anymore but a $15k+ one. The amount of wafer capacity going to AI is insane and will influence more than just memory prices.

Don't forget: the only reason Apple is currently immune to this is that it tends to sign long-term contracts. The moment those expire, it will pass those costs on to consumers.