hn_throwaway_99 4 hours ago

Your comment is responding to an issue that is different from what GP said. GP was talking about Chinese open source particularly, i.e. their open source models, which AFAIK have consistently been keeping up with (albeit a few steps behind) the closed source OpenAI and Anthropic models.

Hardware capacity is a separate issue entirely.

CharlieDigital 4 hours ago

    > have consistently been keeping up with (albeit a few steps behind) 
I mean, this sentence is self-contradictory, no?

    > Hardware capacity is a separate issue entirely.
It seems like hardware capabilities are at the very heart of both training and inference, which is why Nvidia and TSMC are posting record revenue and market capitalization. Removing hardware from the equation discounts a big part of winning this race.
roenxi 4 hours ago

> I mean, this sentence is self-contradictory, no?

By benchmarks, the Chinese models are ahead of where the proprietary US models were ... something like 6 or 12 months ago. And the benchmarks are fuzzy enough anyway that it's hard to say whether a small gap is trivial or significant. The Chinese aren't having any problem keeping up on model quality, and the gap isn't going to lead to any difference that matters unless the US pulls a rabbit out of its hat.

Plus, on dollar-for-performance they might be leading in practice; it's hard to compete with self-hosted.

conception 3 hours ago

You can keep up even if you’re behind. If someone is running a race and you’re constantly two seconds behind their time, you are steps behind but keeping up.