evgen | 3 hours ago
I can run qwen3.6-27b on a four-year-old MacBook Pro; it dominates ChatGPT-4o (the frontier model from 2 years ago) and is competitive against early ChatGPT-5 versions. We are also getting a lot smarter about using and deploying these local models. Your entire AI stack from two years ago would be absolutely crushed by today's local LLMs running on a high-end local inference system, combined with a good modern coding agent.
vb-8448 | 25 minutes ago | parent
Today's open-weights frontier models cannot run locally unless quantization is used. DeepSeek v4 pro requires almost 1 TB of RAM in INT4. I highly doubt there will be consumer-grade HW to run it in 2 years either. And DeepSeek v4 pro is not even close to OAI or Anthropic frontier models.
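The memory figures both comments lean on follow from simple arithmetic: weight memory is roughly parameter count times bits per weight divided by 8. A minimal sketch (the parameter counts below are illustrative assumptions, not published specs; real deployments also need room for the KV cache and runtime overhead):

```python
# Back-of-envelope RAM needed just to hold an LLM's weights.
# Ignores KV cache, activations, and runtime overhead, which add more.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in decimal GB."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 27B model at 4-bit quantization fits on a high-end laptop:
print(weight_memory_gb(27, 4))    # 13.5 GB

# A hypothetical ~2T-parameter model at INT4 lands near 1 TB:
print(weight_memory_gb(2000, 4))  # 1000.0 GB
```

This is why the thread's dividing line sits where it does: 4-bit quantization cuts memory 4x versus FP16, which is enough to bring a ~27B model onto consumer hardware but nowhere near enough for a trillion-parameter-class model.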