▲ rvz 9 hours ago
The technical write-up is great, but Mac users shouldn't get too excited just yet about running 300B+ parameter models locally, as the TPS isn't that good.

> ...at 4.4+ tokens/second

That is the speed even with 4-bit quantization.

> The entire 209GB model streams from SSD through a custom Metal compute pipeline.

This is my main problem. If I were to run this on a Mac SSD, 24/7 for heavy usage such as Openclaw, it would significantly reduce the lifetime of the SSD. I can't imagine using this long term right now, but improvements will follow.

Still a great write-up anyway.
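To put the quoted 4.4 tokens/second in perspective, here is a back-of-envelope wait-time sketch. Only the tokens-per-second figure comes from the write-up; the characters-per-token ratio and the workload sizes are illustrative assumptions.

```python
# Back-of-envelope: what 4.4 tokens/second means in practice.
# Only tokens_per_second is from the write-up; the rest is assumed.

tokens_per_second = 4.4   # quoted generation speed
chars_per_token = 4       # rough English-text average (assumption)

def wait_seconds(n_tokens: float) -> float:
    """Seconds to generate n_tokens at the quoted speed."""
    return n_tokens / tokens_per_second

# A ~500-word answer is roughly 675 tokens (assumption):
print(f"~{wait_seconds(675) / 60:.1f} min per ~500-word answer")

# A long agentic session emitting 100k tokens (assumption):
print(f"~{wait_seconds(100_000) / 3600:.1f} h per 100k tokens")
```

So a single chat reply is a coffee-break wait, but heavy agentic use that burns through hundreds of thousands of tokens stretches into many hours.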
▲ Roxxik 9 hours ago
Does an SSD meaningfully degrade under read-only workloads?
▲ etiam 9 hours ago
> If I were to run this on a Mac SSD, 24/7 for heavy usage such as Openclaw, that is going to significantly reduce the lifetime of the SSD.

How sure are you about that? I've never looked closely at how a large mixture-of-experts LLM switches between expert modules, but when it stays on roughly the same topic (as it often would while editing the same codebase), I wouldn't be surprised if the changes in expert composition are fairly rare and fairly small, and to the extent switching does happen, it causes repeated reads from the flash disk rather than writes.
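The expert-switching intuition above can be sketched as a toy LRU cache over expert weight blocks. This is not the write-up's actual Metal pipeline; the class, expert counts, and routing sequence are all invented for illustration. The point it demonstrates: when routing stays within a small, stable set of experts, almost every access is a cache hit, the few misses are reads, and evictions never write anything back to disk.

```python
# Toy mixture-of-experts weight cache (illustrative only; names,
# sizes, and the routing trace below are made-up assumptions).
from collections import OrderedDict

NUM_EXPERTS = 64   # experts in the hypothetical model
CACHE_SLOTS = 8    # expert blocks that fit in RAM at once (assumption)

class ExpertCache:
    """LRU cache over expert weight blocks streamed from SSD."""
    def __init__(self, slots: int):
        self.slots = slots
        self.cache: OrderedDict[int, str] = OrderedDict()
        self.disk_reads = 0

    def load(self, expert_id: int) -> str:
        if expert_id in self.cache:
            self.cache.move_to_end(expert_id)   # hit: no disk I/O at all
        else:
            self.disk_reads += 1                # miss: read-only SSD fetch
            if len(self.cache) >= self.slots:
                self.cache.popitem(last=False)  # evict; nothing written back
            self.cache[expert_id] = f"weights[{expert_id}]"
        return self.cache[expert_id]

cache = ExpertCache(CACHE_SLOTS)
# A topically stable session keeps routing to a small expert set
# (2 experts per token, a common MoE top-k choice):
routing = [(3, 17), (3, 17), (3, 9), (17, 3), (9, 3)] * 20
for token_experts in routing:
    for e in token_experts:
        cache.load(e)

print(f"{cache.disk_reads} disk reads for {2 * len(routing)} expert activations")
```

With only three distinct experts in play, 200 activations cost just three cold reads; the SSD-wear question then reduces to whether pure reads degrade flash, which is Roxxik's question above.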
▲ Wowfunhappy 8 hours ago
Eh. I mean, 4 tokens a second works fine if you're patient. Go do something else while you wait. I feel like whenever I'm trying to find information on which local models will work on my hardware, I have to overestimate because people don't know how to wait for things.

Also, reading data doesn't cause SSD wear.
▲ hrmtst93837 9 hours ago
If you want decent throughput and don't care about burning SSD write cycles on a box that was never meant to act like a tiny inference server, a used server with actual RAM is still the cheaper and less silly option. I wouldn't expect Apple's warranty team to be much help.