root_axis 4 days ago
It's true that local LLMs are only going to get better, but it's not clear they'll become generally practical for the foreseeable future. There have been huge improvements in the reasoning and coding capabilities of local models, but most of that comes from refinements to training data and training techniques (e.g. RLHF, DPO, CoT), while the most important factor by far remains the ability to reduce hallucinations to comfortable margins using the raw statistical power of massive full-precision parameter counts. The hardware gap between today's SOTA models and what's available to the consumer is so massive that it'll likely be at least a decade before local models become practical.
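
To make the gap concrete, here's a rough back-of-envelope sketch; the 1T parameter count is a hypothetical stand-in for a frontier-scale model, not a published figure, and this counts weights only (no KV cache or activations):

    # Memory needed just to hold the weights of a hypothetical
    # 1T-parameter model served at FP16/BF16 ("full precision").
    params = 1_000_000_000_000       # hypothetical frontier-scale count
    bytes_per_param = 2              # FP16/BF16
    weights_gb = params * bytes_per_param / 1e9   # ~2000 GB of weights
    consumer_vram_gb = 24            # e.g. a single RTX 4090
    print(weights_gb / consumer_vram_gb)          # ~83x gap, weights alone

Even granting generous quantization, that's roughly two orders of magnitude between frontier-scale serving and a single consumer GPU.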