EnPissant | 3 days ago
I think the Mac Studio is a poor fit for gpt-oss-120b. On my 96 GB DDR5-6000 + RTX 5090 box, I see ~20 s prefill latency for a 65k prompt and ~40 tok/s decode, even with most experts on the CPU. A Mac Studio will decode faster than that, but prefill will be tens of times slower due to much lower raw compute vs a high-end GPU. For long prompts that can make it effectively unusable, and you will hit this long before 65k context. That's what the parent was getting at.

If you have time, could you share numbers for something like:

    llama-bench -m <path-to-gpt-oss-120b.gguf> -ngl 999 -fa 1 --mmap 0 -p 65536 -b 4096 -ub 4096

Edit: The only Mac Studio pp65536 datapoint I've found is this Reddit thread: https://old.reddit.com/r/LocalLLaMA/comments/1jq13ik/mac_stu...

They report ~43.2 minutes of prefill latency for a 65k prompt on a 2-bit DeepSeek quant. gpt-oss-120b should be faster than that, but still very slow.
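For a sense of scale, here is a rough back-of-envelope sketch using the two figures above (my ~20 s RTX 5090 prefill and the ~43.2 min Reddit number); it is illustrative only, since the Reddit run used a different model and quant:

    # Rough prefill comparison from the figures cited above (illustrative only:
    # the Mac Studio latency is the Reddit DeepSeek datapoint, not gpt-oss-120b).
    PROMPT_TOKENS = 65_536

    def prefill_rate(latency_s: float) -> float:
        """Effective prefill throughput in tokens per second."""
        return PROMPT_TOKENS / latency_s

    rtx_5090 = prefill_rate(20)           # ~3,300 tok/s (20 s for the 65k prompt)
    mac_studio = prefill_rate(43.2 * 60)  # ~25 tok/s (43.2 min for the 65k prompt)

    print(f"RTX 5090:   {rtx_5090:,.0f} tok/s prefill")
    print(f"Mac Studio: {mac_studio:,.0f} tok/s prefill")
    print(f"Gap: ~{rtx_5090 / mac_studio:.0f}x")

That works out to roughly two orders of magnitude difference in prefill throughput, which is why the prompt length matters so much here.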
int_19h | 3 days ago
This is a Mac Studio M1 Ultra with 128 GB of RAM.