scrpgil 10 hours ago

The author assumes specialization only happens at the model layer. But there's a third option: general model + specialized context.

I built an MCP server that feeds a user's real schedule, tasks, and goals into Claude/ChatGPT. The model isn't specialized — but the output is, because the context is. No fine-tuning, no domain-specific training. Just structured data at inference time.
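To make the idea concrete, here's a minimal sketch of "general model + specialized context." Everything in it (the function name, the data shape) is hypothetical illustration, not the actual MCP server described above: the point is just that the specialization lives in a string assembled at inference time, not in the weights.

```python
def build_system_prompt(schedule, tasks, goals):
    """Flatten structured user data into context the model sees at inference time."""
    lines = ["You are a planning assistant. The user's current data:"]
    lines.append("Schedule:")
    lines += [f"- {s['time']}: {s['event']}" for s in schedule]
    lines.append("Tasks:")
    lines += [f"- {t}" for t in tasks]
    lines.append("Goals:")
    lines += [f"- {g}" for g in goals]
    return "\n".join(lines)

# Example with made-up data: no fine-tuning anywhere; the "specialization"
# is just this string, prepended to whatever the user asks.
prompt = build_system_prompt(
    schedule=[{"time": "09:00", "event": "standup"}],
    tasks=["finish quarterly report"],
    goals=["ship v2 by Friday"],
)
```

An MCP server does the same thing one level up: instead of the app building the string, the model's host fetches the structured data from the server and injects it into the conversation.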

Domain-specific LLMs won't take off, not because specialization is useless, but because it's cheaper to specialize the input than the model.