ethan_smith 3 hours ago

They do, but that's kind of the article's point: someone still has to write and maintain the per-model chat template and tool-call parsing inside vLLM/SGLang. Every time a new model ships with a slightly different format, the inference server needs an update. The M×N problem doesn't disappear; it just gets pushed one layer down.
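
To make the per-model parsing burden concrete, here's a minimal sketch of two parsers for the same logical tool call in two different wire formats. The formats are illustrative (loosely modeled on the XML-tag and bare-JSON styles different model families emit), and the function names are hypothetical, not vLLM/SGLang internals:

```python
import json
import re


def parse_tag_style(text: str) -> list[dict]:
    """Parse tool calls wrapped in <tool_call>...</tool_call> tags,
    the style some model families emit."""
    calls = []
    for blob in re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL):
        calls.append(json.loads(blob))
    return calls


def parse_bare_json_style(text: str) -> list[dict]:
    """Parse a bare JSON object with a "name" key, the style other
    families emit. Returns [] if the output isn't a tool call."""
    try:
        obj = json.loads(text.strip())
    except json.JSONDecodeError:
        return []
    return [obj] if isinstance(obj, dict) and "name" in obj else []


# The same logical call, two wire formats -- each needs its own parser,
# and a new model with a third format means a third parser:
a = '<tool_call>{"name": "get_weather", "arguments": {"city": "Oslo"}}</tool_call>'
b = '{"name": "get_weather", "parameters": {"city": "Oslo"}}'

print(parse_tag_style(a)[0]["name"])        # get_weather
print(parse_bare_json_style(b)[0]["name"])  # get_weather
```

Multiply that by every model family and every inference server and you get the M×N matrix the comment describes.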