simianwords 9 hours ago
> We're already using domain-specific LLM's. The only LLM trained lawfully that I know of, KL3M, is also domain-specific. So, the title is already wrong. This looks like an "ethical" LLM but not domain specific. What is the domain here?

> That's why domain-specific models are usually general models converted to domain-specific models by continued pretraining

I've also wondered about this, as in the case of the Codex models. My hunch is that a good general model trumps a continued-pretrained one if you just add an appropriate system prompt, which is why even OpenAI sorta recommends using GPT-5.4 over any Codex model.
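To make the "system prompt instead of continued pretraining" idea concrete, here's a minimal sketch of steering a general chat model toward one domain purely through the system message. The function name, prompt text, and legal domain are all made up for illustration; the resulting messages list is the shape accepted by the usual chat-completion style APIs:

```python
# Hypothetical example: constrain a general model to a legal domain
# with a system message alone, no domain-specific training.
def build_legal_prompt(user_question: str) -> list[dict]:
    system = (
        "You are a contracts lawyer. Answer only questions about contract "
        "law, name the relevant doctrine, and decline unrelated requests."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]

messages = build_legal_prompt("Is a verbal agreement enforceable?")
# Pass `messages` to any chat-completion endpoint; swapping the system
# string is all it takes to retarget the same general model to a new domain.
```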