dial9-1 12 hours ago
Still waiting for the day I can comfortably run Claude Code with local LLMs on macOS with only 16 GB of RAM.
bearjaws 4 hours ago
My super uninformed theory is that local LLMs will trail foundation models by about two years for practical use. For example, a lot of work is currently going into improving tool calling and agentic workflows, and tool calling first started popping up for local LLMs around the end of 2023. This is setting aside the standard benchmarks, which local LLMs get "benchmaxxed" on to show impressive numbers, yet they rarely meet expectations when used with OpenCode. In theory Qwen3.5-397B-A17B should be nearly a Sonnet 4.6-class model, but it is not.
rubymamis 6 hours ago
Doesn't OpenCode support local models?
gedy 12 hours ago
How close is this? It says it needs 32 GB minimum?
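The 16 GB vs. 32 GB question mostly comes down to arithmetic: resident weight memory is roughly parameter count times bytes per weight, plus runtime overhead for the KV cache and framework. A rough sketch of that estimate (the overhead factor is an assumption, not a measured value):

```python
def model_ram_gib(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough RAM estimate for running a local LLM.

    params_billion  -- total parameter count in billions (MoE models still
                       load all experts, so use total params, not active)
    bits_per_weight -- quantization level, e.g. 16 (fp16), 8, or 4
    overhead        -- fudge factor for KV cache and runtime (assumed ~1.2x)
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30 * overhead

# A 7B model at 4-bit quantization fits comfortably in 16 GiB;
# a ~30B model at 4-bit is right at the edge of a 32 GiB machine.
print(f"7B  @ 4-bit: {model_ram_gib(7, 4):.1f} GiB")
print(f"30B @ 4-bit: {model_ram_gib(30, 4):.1f} GiB")
```

By this estimate, anything much beyond ~20B total parameters at 4-bit is out of reach on a 16 GB Mac once you account for the OS and the unified-memory GPU ceiling, which is consistent with the 32 GB minimum quoted above.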