▲ sigmar | 4 hours ago
> various open-weighted Chinese models out there. They've kept good pace with flagship models

I don't think this is accurate. Maybe it will change in the future, but it seems like the Chinese models aren't keeping up with actual training techniques; they're largely using distillation, which means they'll always be catching up and never at the cutting edge. https://x.com/Altimor/status/2024166557107311057
▲ A_D_E_P_T | 4 hours ago
> they're largely using distillation techniques. Which means they'll always be catching up and never at the cutting edge.

You link to an assumption, and a seemingly highly motivated one at that. Have you used the Chinese models? IMO Kimi K2.5 beats everything but Opus 4.6 and Gemini 3.1... and it's not exactly inferior to the latter, it's just different. It's much better at most writing tasks, and its "Deep Research" mode is by a wide margin the best in the business. (OpenAI's has really gone downhill for some reason.)
▲ parliament32 | 2 hours ago
Does that actually matter? If "catching up" means "a few months behind" at worst, for... free?
▲ arthurcolle | 3 hours ago
I have been using a quorum composed of step-3.5-flash, Kimi K2.5, and GLM-5, and I have found it outperforms Opus 4.5 at a fraction of the cost. That's pretty cutting edge to me.

EDIT: It's not a swarm; it's closer to a voting system. All three models get the same prompt simultaneously via parallel API calls (OpenAI-compatible endpoints), and the system uses weighted consensus to pick a winner. Each model has a weight (e.g. step-3.5-flash=4, kimi-k2.5=3, glm-5=2) based on empirically observed reliability. The flow looks like: the same prompt fans out to all three models in parallel -> responses that look like refusals get filtered out -> a weighted vote over the surviving responses picks the winner (rough sketch below).
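To make that concrete, here's a minimal sketch of the flow, assuming OpenAI-compatible chat endpoints; the endpoint URLs, refusal heuristics, and tie-breaking here are illustrative placeholders, not the actual agent code:

    # Illustrative quorum sketch -- endpoints, keys, and weights are made up.
    import concurrent.futures
    from openai import OpenAI

    MODELS = [  # (client, model id, weight)
        (OpenAI(base_url="https://step.example/v1", api_key="..."), "step-3.5-flash", 4),
        (OpenAI(base_url="https://moonshot.example/v1", api_key="..."), "kimi-k2.5", 3),
        (OpenAI(base_url="https://zhipu.example/v1", api_key="..."), "glm-5", 2),
    ]
    REFUSAL_MARKERS = ("i can't help", "i cannot assist")  # crude refusal filter

    def ask(client, model, weight, prompt):
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return weight, resp.choices[0].message.content

    def quorum(prompt):
        # 1. Fan the same prompt out to all three models in parallel.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            answers = list(pool.map(lambda m: ask(*m, prompt), MODELS))
        # 2. Drop refusals so one over-cautious model can't sink the run.
        kept = [(w, text) for w, text in answers
                if not any(marker in text.lower() for marker in REFUSAL_MARKERS)]
        # 3. Weighted vote: here just "highest-weight surviving answer wins";
        #    a real consensus step would also score agreement between answers.
        return max(kept, key=lambda wt: wt[0])[1]

The point of the weights is just to encode which model has been most reliable in practice, so disagreements don't have to be resolved from scratch every time.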
The key insight is that cheap models in consensus are more reliable than a single expensive model. Any one of these models alone hallucinates or refuses more than the quorum does collectively. The refusal filtering is especially useful: if one model over-refuses, the others compensate.

Tooling: it's a single Python agent (~5200 lines) with protocol-based tool dispatch, 110+ operations covering filesystem, git, web fetching, code analysis, media processing, a RAG knowledge base, etc. The quorum sits in front of the LLM decision layer, so the agent autonomously picks tools and chains actions. The purpose is general: coding, research, data analysis, whatever.

I won't include the output for length, but I just kicked off a prompt to get some info on the recent Trump tariff Supreme Court decision: it fetched stock data from Benzinga/Google Finance, then researched the SCOTUS tariff ruling across AP, CNN, Politico, The Hill, and CNBC, all orchestrated by the quorum picking which URLs to fetch and synthesizing the results, continuing until something like 45 URLs were fully processed.

The output was longer than a typical single chatbot response: you get all the non-determinism of whatever the models actually did over the long-running execution, and then everything has to reach consensus, which means each response gets at least one (or N) additional passes across the other models before it converges.
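For flavor, here's a minimal sketch of what protocol-based tool dispatch can look like in Python; the Tool protocol, registry, and fetch_url operation are my own illustrative names, not the actual agent's API:

    # Illustrative tool-dispatch sketch -- names and structure are assumptions.
    from typing import Any, Protocol

    class Tool(Protocol):
        name: str
        def run(self, **kwargs: Any) -> str: ...

    REGISTRY: dict[str, Tool] = {}

    def register(cls):
        tool = cls()
        REGISTRY[tool.name] = tool   # e.g. "fetch_url" -> FetchUrl instance
        return cls

    @register
    class FetchUrl:
        name = "fetch_url"
        def run(self, url: str = "", **_: Any) -> str:
            import urllib.request
            with urllib.request.urlopen(url) as r:
                return r.read().decode("utf-8", errors="replace")

    def dispatch(op: str, **kwargs: Any) -> str:
        # The quorum decides which op to call next; this just routes the call.
        return REGISTRY[op].run(**kwargs)

    # e.g. dispatch("fetch_url", url="https://apnews.com/")

With a registry like that, adding the 111th operation is just another class; the LLM layer only ever sees operation names and keyword arguments.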