otabdeveloper4 3 hours ago
We had good small language models for decades (e.g. BERT). The entire point of LLMs is that you don't have to spend money training them for each specific case: you can train something like Qwen once and then use it to solve any classification/summarization/translation problem in minutes instead of weeks.
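A minimal sketch of what the parent comment describes: one general instruction-tuned model reused for classification, summarization, and translation purely by swapping prompts, with no per-task training. The prompt builders below are the entire "specialization"; `complete` is a hypothetical stand-in for whatever local inference call you use (llama.cpp, an Ollama HTTP request, etc.) and is left as a parameter here.

```python
# One general LLM, many tasks: each task is just a different prompt
# around the same shared model call, with no per-task fine-tuning.

def classification_prompt(text: str, labels: list[str]) -> str:
    # Turn an arbitrary label set into a zero-shot classification prompt.
    return (
        f"Classify the text into exactly one of: {', '.join(labels)}.\n"
        f"Text: {text}\nLabel:"
    )

def summarization_prompt(text: str) -> str:
    return f"Summarize the following in one sentence:\n{text}\nSummary:"

def translation_prompt(text: str, target_language: str) -> str:
    return f"Translate into {target_language}:\n{text}\nTranslation:"

def run_task(prompt: str, complete) -> str:
    # `complete` is the single shared model (hypothetical here).
    # Swapping prompts, not retraining, is what switches the task.
    return complete(prompt).strip()
```

The contrast with the BERT era is that each of these functions would previously have been a separately trained model with its own labeled dataset; here the label set and target language are just runtime arguments.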
znnajdla 2 hours ago | parent
> The entire point of LLMs is that you don't have to spend money training them for each specific case.

I don't agree. I would say the entire point of LLMs is to solve a class of non-deterministic problems that cannot be solved with deterministic procedural code. LLMs don't need to be generally useful in order to be useful for specific business use cases.

As a programmer, I would be very happy with a local coding agent like Claude Code that can do nothing but write code in my chosen programming language or framework, instead of a general model like Opus, if it could be hyper-specialized and optimized for that one task and therefore small enough to run on my MacBook. I don't need the other general reasoning capabilities of Opus.