latchkey 4 days ago

> This is probably kimi trying to protect their brand from bargain basement providers that don't properly represent what the models are capable of.

I'm curious what exactly they mean by this...

"because we learned the hard way that open-sourcing a model is only half the battle."

HarHarVeryFunny 4 days ago | parent [-]

I'd take it at face value. Since they release open weights, they would appear to genuinely want other providers to serve the model as well as they do themselves, but the benefit of that depends on it being served accurately.

latchkey 4 days ago | parent [-]

I agree, but how about some details?

Onavo 3 days ago | parent | next [-]

Kimi, GLM, and Minimax are the "Big Three" of open source Chinese AI startups. There's also Qwen and DeepSeek, but those are subsidized by other lines of business.

The Chinese AI models are generally 5-6 months behind high-end SOTA Western models (as of the time of this comment, that's Opus 4.7 and ChatGPT 5.4 Thinking; it's rumored, however, that the Mythos and Spud codename models are even better).

To gain market share, the Chinese startups use open source as a distribution strategy and have essentially made mid-to-high-end AI a commodity. The best models are still Western, but the open Chinese models are good enough for any application that doesn't require the highest performance on the market, or where there's a need for extensive customization or alignment (imagine you are an oil-rich petro state and you don't want your national AI strategy to be tied to liberal international order ideology).

It creates a lot of pricing pressure on the low and mid end, and it's also why Anthropic is desperately trying to go full B2B instead.

However, if the third parties hosting the Chinese models at near cost don't perform good quality control, it ruins the strategy, because customers are no longer inclined to use Chinese models (and first-party hosting on Chinese infrastructure is out of the question for geopolitical reasons, so everybody hides behind the polite fiction of using resellers like OpenRouter, Fal.ai, Wavespeed, Fireworks AI, etc.).

ashirviskas 3 days ago | parent | prev [-]

I've been burned on OpenRouter by getting routed through terrible quants with equally terrible quality, while paying maybe 15% less.

Nearly a year ago it was impossible to avoid this, due to OpenRouter's silly routing algorithm and API. You had to set multiple things just right to make it work.
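For anyone who hasn't fought this: the knobs the parent is alluding to live in OpenRouter's per-request `provider` routing preferences. A minimal sketch of a request payload that refuses to be routed to low-precision quants, assuming OpenRouter's documented `quantizations` and `allow_fallbacks` fields (the model slug here is just a placeholder):

```python
import json

# Hypothetical chat-completions payload with OpenRouter provider routing
# preferences pinned, so the router can't silently fall back to a provider
# serving an aggressive quant of the model.
payload = {
    "model": "moonshotai/kimi-k2",  # placeholder model slug
    "messages": [{"role": "user", "content": "Hello"}],
    "provider": {
        # Only route to providers serving the model at these precisions.
        "quantizations": ["fp8", "bf16"],
        # Fail the request rather than reroute outside the allowed set.
        "allow_fallbacks": False,
    },
}

# This would be POSTed to the chat completions endpoint with your API key;
# here we just show the routing section of the body.
print(json.dumps(payload["provider"], sort_keys=True))
```

Whether a given provider honors these hints is exactly the quality-control problem being discussed, so treat this as defense, not a guarantee.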

Similar to their other API quirks. You want a valid JSON-format response? Sure, set response_format to "json", just like our documentation suggests. Oh, it only works some of the time? How silly, why would you expect it to work all of the time? If you want it to work more often, set require_params to true. We may still use other providers that don't offer it, but you want that, right? You don't? Well, then set our "very_require_params" to "very_true". And then switch a few toggles in the frontend. Oh, and also add these 7 lines just so your other config options don't break. Oh wait, they will break, how silly of us. Is there any way to make it work as advertised? Of course not!
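Sarcasm aside, the real-world version of the rant above is roughly two settings: request a JSON response *and* tell the router to only use providers that actually support that parameter, so it isn't silently dropped. A sketch under the assumption of OpenRouter's documented `response_format` and `provider.require_parameters` fields ("very_require_params" is, of course, a joke; the model slug is a placeholder):

```python
import json

# Hypothetical payload combining a JSON-mode request with a routing
# constraint that excludes providers which would ignore response_format.
payload = {
    "model": "moonshotai/kimi-k2",  # placeholder model slug
    "messages": [{"role": "user", "content": "Reply with a JSON object."}],
    # Ask for structured JSON output...
    "response_format": {"type": "json_object"},
    # ...and only route to providers that support the parameters we sent.
    "provider": {"require_parameters": True},
}

print(json.dumps(payload, indent=2))
```

Even with both set, you're trusting each downstream provider's implementation of JSON mode, which is the whole complaint.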

Sorry for the semi-offtopic rant. I still use them every day, just not for open models anymore.