nickpsecurity | 9 hours ago
We're already using domain-specific LLMs. The only LLM trained lawfully that I know of, KL3M, is also domain-specific. So, the title is already wrong. The author is correct that intelligence is compounding. That's why domain-specific models are usually general models converted to domain-specific models by continued pretraining. Even general models, like H2O's, have been improved by constraining them to domain-supporting, general knowledge in a second phase of pretraining. But they're eventually domain-specific. Outside LLMs, I think most models are domain-specific: genetics, stock prices, ECG/EKG scans, transmission shifting, seismic, climate, etc. LLMs trying to do everything are an exception to the rule that most ML is domain-specific.
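The "second phase of pretraining" idea above usually involves mixing domain data with some retained general data to limit catastrophic forgetting. A minimal sketch of that data-mixing step (function name, ratio, and corpus handling are illustrative assumptions, not any specific lab's recipe):

```python
import random

def mix_corpora(general_docs, domain_docs, domain_ratio=0.7, seed=0):
    """Interleave domain documents with a sample of general documents
    for a continued-pretraining phase.

    Keeping some general data in the mix (here 30%) is a common way to
    steer a general model toward a domain without fully forgetting its
    broad knowledge. The exact ratio is an assumption for illustration.
    """
    rng = random.Random(seed)
    n_domain = len(domain_docs)
    # Sample enough general docs to hit the target domain ratio.
    n_general = max(1, round(n_domain * (1 - domain_ratio) / domain_ratio))
    sampled_general = [rng.choice(general_docs) for _ in range(n_general)]
    mixed = list(domain_docs) + sampled_general
    rng.shuffle(mixed)
    return mixed
```

The mixed corpus would then be fed to an ordinary pretraining loop; only the data composition changes between the two phases.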
simianwords | 9 hours ago | parent
> We're already using domain-specific LLMs. The only LLM trained lawfully that I know of, KL3M, is also domain-specific. So, the title is already wrong.

This looks like an "ethical" LLM but not a domain-specific one. What is the domain here?

> That's why domain-specific models are usually general models converted to domain-specific models by continued pretraining

I've also wondered about this, as with the Codex models. My hunch is that a good general model trumps a continued-pretrained one given just an appropriate system prompt, which is why even OpenAI sort of recommends using GPT-5.4 over any Codex model.
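The system-prompt approach mentioned above can be sketched as follows: instead of further pretraining, the request to a general chat model simply carries domain instructions. The message schema follows the widely used OpenAI-style chat format; the helper name and prompt wording are illustrative assumptions:

```python
def domain_prompt_messages(domain, user_query):
    """Build a chat request that specializes a general model via a
    system prompt rather than continued pretraining.

    The dict-of-role-and-content shape matches the common chat-completions
    message schema; the instruction text itself is just an example.
    """
    system = (
        f"You are an expert assistant for {domain}. "
        "Answer only within this domain and state relevant caveats."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_query},
    ]
```

The trade-off: the prompt steers behavior at inference time with zero training cost, while continued pretraining bakes domain knowledge into the weights.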