echelon 11 hours ago

> Local llm or nothing at all.

I'm not as familiar with LLMs as I am with media models, but there can't seriously be local contenders beating Opus, GPT-5, etc. Right?

At-home hardware isn't good enough.
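
For scale, the weights alone on a frontier-class model dwarf consumer VRAM. A rough back-of-the-envelope in Python (the parameter counts and quantization widths below are illustrative assumptions, not published specs):

    # Rough VRAM needed just to hold model weights. Ignores KV cache,
    # activations, and framework overhead, which all add more on top.
    def weight_gb(params_billions: float, bits_per_weight: int) -> float:
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    # Illustrative sizes (assumed, not official specs): a ~1T-param
    # frontier-class model vs. a 70B open model, at 16-bit and 4-bit.
    for name, params in [("~1T-class", 1000.0), ("70B-class", 70.0)]:
        for bits in (16, 4):
            print(f"{name} @ {bits}-bit: {weight_gb(params, bits):,.0f} GB")

    # A high-end consumer GPU has ~24 GB of VRAM. Even a 70B model
    # at 4-bit (~35 GB) doesn't fit on one card without offloading.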

Nobody "far enough behind" that isn't scared to release their model as open weights actually has a competitive model within 70% of the lead models.

Now that the Chinese labs are catching up and even pulling ahead (e.g., in video), they've stopped releasing the weights.

Stragglers release weights. And those weights aren't competitive.

Am I missing something?

zozbot234 5 hours ago

GLM and Kimi are still releasing weights for near-SOTA models. DeepSeek, Qwen, and arguably MiniMax are the ones that are perhaps falling behind.
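
And "releasing weights" really does mean anyone can pull and run them. A minimal local-inference sketch with Hugging Face transformers (the repo ID is a placeholder, and models this size need serious hardware or aggressive quantization, so this shows the shape of the API rather than a one-click recipe):

    # Minimal local-inference sketch using Hugging Face transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "org/open-weights-model"  # placeholder, not a real repo ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" spreads layers across available GPUs/CPU.
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))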