PlatoIsADisease 3 hours ago
We're getting into a debate between particulars and universals. Calling the 'unified memory' VRAM is quite a generalization. Whatever the case, we can tell from stock prices that whatever this VRAM is, it's nothing compared to NVIDIA's. Anyway, we tried running a 70B model on a MacBook (I can't remember which M-series chip) at a Fortune 20 company, and it never became practical. We were comparing strings of ~200 characters each, so roughly 400 characters per request plus a pre-prompt. I can't imagine this being reasonable on a 1T model, let alone the ~400B-class models from DeepSeek and Llama.
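For context, here's the usual back-of-envelope on why a dense 70B model strains a MacBook. The figures below are generic quantization sizes, not measurements from that particular setup:

    # Rough weight footprint for a dense 70B model at common quantizations.
    # Weights only; the KV cache and activations add more on top.
    PARAMS = 70e9

    for name, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
        gb = PARAMS * bytes_per_param / 1e9
        print(f"{name}: ~{gb:.0f} GB of weights")

    # fp16: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB -- so even at 4-bit it
    # only fits on higher-memory MacBook configs, with little headroom.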
Gracana 2 hours ago
Kimi K2.5 is a mixture-of-experts model with only 32B active parameters, so each decoded token streams far fewer weights than a dense 70B model does, and it will run faster than your 70B model.
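The speedup follows from decoding being memory-bandwidth-bound: per token you read roughly the active weights once, so tokens/s is capped at bandwidth divided by active bytes. A quick sketch of that estimate (the 800 GB/s figure is Apple's quoted M2 Ultra bandwidth, assumed here; real throughput lands below these ceilings):

    # Upper-bound decode speed: tokens/s ~ memory bandwidth / active weight bytes.
    BANDWIDTH_GBPS = 800   # assumed M2 Ultra unified memory bandwidth
    BYTES_PER_PARAM = 0.5  # 4-bit quantization

    def est_tokens_per_sec(active_params_billions):
        active_gb = active_params_billions * BYTES_PER_PARAM
        return BANDWIDTH_GBPS / active_gb

    print(est_tokens_per_sec(70))  # dense 70B: ~23 tok/s ceiling
    print(est_tokens_per_sec(32))  # 32B active (MoE): ~50 tok/s ceiling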
simonw 3 hours ago
Here's a video of the earlier 1T-parameter K2 model running under MLX on a pair of Mac Studios: https://twitter.com/awnihannun/status/1943723599971443134 - performance isn't terrible.
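For anyone wanting to try this themselves, the single-machine path with the mlx-lm package looks roughly like this. The model repo id is a placeholder (substitute any MLX-quantized model that fits in your memory), and the two-Mac-Studio setup in the video additionally relies on MLX's distributed support, which isn't shown here:

    # pip install mlx-lm  (Apple Silicon only)
    from mlx_lm import load, generate

    # Placeholder repo id -- swap in an MLX-quantized model you can fit.
    model, tokenizer = load("mlx-community/Kimi-K2-Instruct-4bit")

    # Generate a completion; verbose=True prints tokens/s as it runs.
    text = generate(
        model,
        tokenizer,
        prompt="Compare these two strings for similarity: ...",
        max_tokens=256,
        verbose=True,
    )
    print(text)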