ttoinou 8 hours ago
With an M3 Max with 64GB of unified RAM you can code with a local LLM, so the bar is much lower.
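For example, one way to do this (a minimal sketch, assuming ollama is installed, `ollama serve` is running, and a coding model such as qwen2.5-coder has already been pulled) is to query ollama's local HTTP API:

    # Minimal sketch: query a locally served coding model via ollama's HTTP API.
    # Assumes `ollama serve` is running on the default port and that
    # `ollama pull qwen2.5-coder` was done beforehand; a 4-bit quant of a
    # ~32B coder model fits comfortably in 64GB of unified RAM.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "qwen2.5-coder",
        "prompt": "Write a Python function that reverses a linked list.",
        "stream": False,  # return one complete response instead of a token stream
    }).encode()

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])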
Greed 5 hours ago | parent
But why? Spending several thousand dollars to run sub-par models when the break-even point could still be years away seems bizarre for any real use case where your goal is productivity over novelty. Anyone who has used Codex or Opus can attest that the difference between those and a locally available model like Qwen or Codestral is night and day.

To be clear, I totally get the idea of running local LLMs for toy reasons. But in a business context, the sell on a stack of Mac Pros seems misguided at best.