root_axis 5 hours ago
I have two A100s and have been playing with local models for years. There are definitely moments where they're quite impressive, but small context windows and unreliability become obvious fast.

> For those of us a bit crazy, we are running KimiK2.6, GLM5.1

Yes, those can compare to Opus, but you can't run them unquantized for less than $400k in hardware.
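The $400k figure follows from back-of-the-envelope memory arithmetic. A minimal sketch, assuming a ~1T-parameter model, 80 GB A100s, and an illustrative per-card price (all three numbers are assumptions for illustration, not quotes):

```python
import math

def gpus_needed(params: float, bytes_per_param: float, gpu_vram_gb: float,
                overhead: float = 1.2) -> int:
    """GPUs required just to hold the weights, with a rough 20%
    allowance for KV cache and activations."""
    weight_gb = params * bytes_per_param / 1e9
    return math.ceil(weight_gb * overhead / gpu_vram_gb)

PARAMS = 1e12          # assumed ~1T parameters for a frontier-scale MoE
A100_VRAM_GB = 80
A100_PRICE = 15_000    # assumed street price per card, USD

n = gpus_needed(PARAMS, 2.0, A100_VRAM_GB)   # fp16 = 2 bytes/param
print(n, "A100s, roughly $", n * A100_PRICE)
```

Under those assumptions you land at 30 cards and about $450k before servers, networking, and power, which is the order of magnitude the comment is pointing at.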
doctorpangloss 5 hours ago | parent
Two Mac Studio M3 Ultra 512GB machines and one USB cable can run all of those models for maybe $30,000 in hardware, and in my benchmarks those Mac Studios were twice as fast as the A100s on Deepseek v4 Flash, which ships quantized, but not in a meaningfully lossy way.
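The reason the Mac pair works is pooled unified memory: two 512 GB machines give roughly 1 TB to hold weights. A quick sanity check, assuming a ~1T-parameter model and ~4.5 effective bits per weight for a typical 4-bit-style quantization (both figures are illustrative assumptions):

```python
def weights_gb(params: float, bits_per_param: float) -> float:
    """Size of the model weights alone, in GB."""
    return params * bits_per_param / 8 / 1e9

POOL_GB = 2 * 512                 # two M3 Ultra 512 GB machines

fp16_gb = weights_gb(1e12, 16)    # unquantized: 2000 GB, does not fit
q4_gb   = weights_gb(1e12, 4.5)   # ~562 GB, fits with room for KV cache

print(fp16_gb > POOL_GB, q4_gb < POOL_GB)
```

Under those assumptions the unquantized weights overflow the pool by 2x while the quantized ones fit comfortably, which is consistent with needing a quantization that isn't meaningfully lossy rather than full precision.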