▲ roygbiv2 | 3 hours ago
And how much does the hardware cost to run said models?
  ▲ dboreham | 2 hours ago
  You can run them slowly on any machine that has enough memory.
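[Editor's note: dboreham's "enough memory" point can be made concrete with a back-of-envelope sizing rule. The function, the bytes-per-weight figures, and the 15% overhead factor below are illustrative assumptions, not a quoted spec for any particular model.]

```python
def model_ram_gb(params_billions: float, bits_per_weight: int,
                 overhead: float = 0.15) -> float:
    """Rough RAM footprint in GB for a dense LLM.

    Rule of thumb: bytes ~= parameter count x bytes per weight,
    plus some headroom (here 15%) for KV cache and activations.
    """
    weight_bytes = params_billions * 1e9 * (bits_per_weight / 8)
    return weight_bytes * (1 + overhead) / 1e9

# A 70B-parameter model at 4-bit quantization needs roughly 40 GB,
# which is why it fits on a high-RAM Mac but not a phone:
print(round(model_ram_gb(70, 4)))
```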
  ▲ fragmede | 2 hours ago
  How good do you want it to be? For something close to ChatGPT today (April 2026), you're still looking at a system with 7x H200 plus chassis, which will run you around $300K, or a GB200 NVL72, which is $2-3 million. OTOH, a quantized Qwen3.6 model can run on $10,000 (high-end Mac) or $1,000 (Mac mini) worth of hardware. Even a Pixel 10 Pro phone ($1,000) can run useful models locally.