syntaxing 6 hours ago

Hacker News strongly believes Opus 4.5 is the de facto standard and that China is consistently 8+ months behind. Curious how this performs. It'll be a big inflection point if it performs as well as its benchmarks suggest.

Flavius 6 hours ago | parent | next [-]

Based on their own published benchmarks, it appears that this model is at least 6 months behind.

spwa4 6 hours ago | parent [-]

Strange how things evolve. When ChatGPT started, it had about a 2-year head start over Google's best proprietary model, and more than 2 years over open-source models.

Now they're lucky to be 6 months ahead of an open model with at most half the parameter count, trained on 1-2% of the hardware US models are trained on.

rglullis 5 hours ago | parent | next [-]

And more than that, the need for people and businesses to pay a premium for SOTA is getting smaller and smaller.

I thought that OpenAI was doomed the moment Zuckerberg showed he was serious about commoditizing LLMs. Even if Llama wasn't the GPT killer, it showed that there was no secret formula and that OpenAI had no moat.

NitpickLawyer 5 hours ago | parent [-]

> that OpenAI had no moat.

Eh. It's at least debatable. There is a moat in compute (this was openly stated at a recent meeting of AI tech CEOs in China). And a bit of a moat in architecture and know-how (oAI's gpt-oss is still best in class, and if rumours are to be believed, it was mostly trained on synthetic data, a la phi4 but with better data). And there are still moats around data (see the Gemini family, especially Gemini 3).

But if you can conjure up compute, data, and basic architecture, you get xAI, which is up there with the other 3 labs in SotA-like performance. So I'd say there are some moats, but they aren't as safe as they thought they'd be in 2023, for sure.

rbtprograms 5 hours ago | parent | prev [-]

it seems they believed that superior models would be the moat, but when DeepSeek essentially replicated o1, they switched to the ecosystem as the moat.

oersted 5 hours ago | parent | prev [-]

In my experience, GPT-5.2 with extra-high thinking is consistently a bit better and significantly cheaper (even when I use the Fast version, which is 2x the price in Cursor).

The HN obsession with Claude Code might be a bit biased by people trying to justify their expensive subscriptions to themselves.

However, Opus 4.5 is much faster and very high quality too, and that ends up mattering more in practice. I end up using it much more and paying a dear but worthwhile price for it.

PS: Despite what the benchmarks say, I find Gemini 3 Pro and Flash to be a step below Claude and GPT, although still great compared to last year's state of the art, and very fast and cheap. Gemini also seems to have a less AI-sounding writing style.

I am aware this is all quite vague and anecdotal, just my two cents.

I do think these kinds of opinions are valuable. Benchmarks are a useful reference, but they do give the illusion of certainty to something that is fundamentally much harder to measure and quite subjective.

manmal 3 hours ago | parent | next [-]

Better, yes, but cheaper only when looking at API costs, I guess? Who in their right mind uses the API instead of the subsidized plans? There, Opus is way cheaper in terms of subsidized tokens.

anonzzzies 2 hours ago | parent | prev | next [-]

You are using Opus via the API? $200/mo is nothing for what I get from it, so I'm not sure how it is considered expensive. I guess it depends how you use it; I hit the limits every day. Using the API, I would indeed be paying through the nose, but why would anyone?

keyle 3 hours ago | parent | prev [-]

My experience exactly.