__jl__ 3 hours ago

What a model mess!

OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4. Their version numbers jump across different model lines, with Codex at 5.3 and what they now call Instant also at 5.3.

Anthropic are really the only ones who managed to get this under control: Three models, priced at three different levels. New models are immediately available everywhere.

Google essentially only has Preview models! The last GA is 2.5. As a developer, I can either use an outdated model or have zero assurance that the model won't get discontinued within weeks.

strongpigeon 3 hours ago | parent | next [-]

> Google essentially only has Preview models! The last GA is 2.5. As a developer, I can either use an outdated model or have zero assurance that the model won't get discontinued within weeks.

What's funny is that there's a common meme at Google: you can either use the old, unmaintained tool that's used everywhere, or the new beta tool that doesn't quite do what you want.

Not quite the same, but it did remind me of it.

fhrow4484 3 hours ago | parent | next [-]

https://static0.anpoimages.com/wordpress/wp-content/uploads/...

peab 34 minutes ago | parent | next [-]

such a great meme

CactusBlue 2 hours ago | parent | prev | next [-]

Reminds me of Unity features

yieldcrv 2 hours ago | parent | prev | next [-]

Preview Road (only choice, and last preview was deprecated without warning)

goodmythical an hour ago | parent [-]

where's my nightly road?

Who knows, I might arrive before I depart.

madeofpalk an hour ago | parent | prev [-]

oh is this about my workplace?

L-four 3 hours ago | parent | prev | next [-]

Gmail was in beta for 5 years, until 2009.

metalliqaz 2 hours ago | parent [-]

"Gemini, translate 'beta' from Googlespeak to English."

"Ok, here is the translation:"

    'we don't want to offer support'
solarkraft 2 hours ago | parent | next [-]

Just like any Google product then.

cyanydeez 2 hours ago | parent | prev [-]

Nah, it's "We don't want to provide a consistent model that we'll be stuck supporting for a decade, because it just takes up space; until we run everyone out of business, we can't afford to have customers tying their systems to any given model"

Really, the economics make no sense, but that's what they're doing. You can't have a consistent model because it'll pin their hardware and software, and that costs money.

msikora 3 minutes ago | parent [-]

I have a service that relies on NanoBanana Pro, but the availability has been so atrocious that we just might go back to OpenAI.

m_fayer 3 hours ago | parent | prev | next [-]

My 5ish years in the mines of Android native back in the day are not years I recall fondly. Never change, Google.

jakub_g 3 hours ago | parent | prev | next [-]

"Everything is beta or deprecated."

cyanydeez 2 hours ago | parent | prev [-]

The business models of LLMs don't include any guarantee, and somehow that's fine for a burgeoning decade of trillions of dollars of consumption.

Sure, makes total sense guys.

Aurornis 3 hours ago | parent | prev | next [-]

> What a model mess! OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4.

I don't know, this feels unnecessarily nitpicky to me.

It isn't hard to understand that 5.4 > 5.2 > 5.1. It's not hard to understand that the dash-variants have unique properties that you want to look up before selecting.

Especially for a target audience of software engineers, skipping a version number is a common occurrence and never questioned.

Melatonic 2 hours ago | parent [-]

Agreed - and it's a huge step up from their previous naming schemes. That stuff was confusing as hell.

__jl__ 2 hours ago | parent [-]

I see your point. I do find Anthropic's approach cleaner, though, particularly when you add in mini and nano. That makes five models priced differently. Some share the same core name, others don't: gpt 5 nano, gpt 5 mini, gpt 5.1, gpt 5.2, gpt 5.4. And we're not even talking about thinking budget.

But generally: These are not consumer facing products and I agree that someone who uses the API should be able to figure out the price point of different models.

jbonatakis 2 hours ago | parent | prev | next [-]

Google is already sending notices that the 2.5 models will be deprecated soon while all the 3.x models are in preview. It really is wild and peak Google.

boringg an hour ago | parent [-]

Like building on quicksand for dependencies. I guess though the argument is that the foundation gets stronger over time

bethekidyouwant an hour ago | parent [-]

What dependency could possibly be tied to a non-deterministic AI model? Just include the latest one at your price point.

jbonatakis an hour ago | parent [-]

Well, it's not even about performance (define that however you will), but behavior is definitely different from model to model. So while whatever new model gets billed as an improvement, changing models can meaningfully change the behavior of any app built on top of it.

0xbadcafebee 3 hours ago | parent | prev | next [-]

> or have zero assurance that the model won't get discontinued within weeks

Why are you using the same model after a month? Every month a better model comes out. They are all accessible via the same API. You can pay per-token. This is the first time in, like, all of technology history, that a useful paid service is so interoperable between providers that switching is as easy as changing a URL.
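The "changing a URL" point is roughly how OpenAI-compatible chat endpoints work in practice. A minimal sketch, with illustrative base URLs and model names (the specific values here are placeholders, not real endpoints):

```python
# Sketch: for an OpenAI-compatible chat-completions API, the only
# provider-specific bits are the base URL, the API key, and the model
# name. All values below are illustrative placeholders.
import json

PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-5.4"},
    "other":  {"base_url": "https://example.com/v1",    "model": "some-model"},
}

def build_request(provider: str, prompt: str) -> tuple[str, str]:
    """Return (endpoint URL, JSON request body) for a chat-completions call."""
    cfg = PROVIDERS[provider]
    url = cfg["base_url"] + "/chat/completions"
    body = json.dumps({
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

# Switching providers is just a different key into PROVIDERS.
url, body = build_request("openai", "hello")
```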

phainopepla2 2 hours ago | parent | next [-]

If you're trying to use LLMs in an enterprise context, you would understand. Switching models sometimes requires tweaking prompts. That can be a complete mess, when there are dozens or hundreds of prompts you have to test.

bethekidyouwant an hour ago | parent [-]

This sounds made up, much like "prompt engineering". Let's hear an actual example.

mcint 4 minutes ago | parent [-]

Enterprises moving slowly, or preferring to remain on old technology they already know how to work with, is received wisdom in HN-adjacent computing: a truism known and reported for more than three decades (five decades since The Mythical Man-Month).

Sounds like someone who's responsible, on the hook, for a bunch of repeatable processes (as much as LLM-driven processes will be), operating at scale.

Just in the open, tools like open-webui bolt on evals so you can compare how different models, including new ones, perform on the tasks that you in particular care about.

Indeed, LLM providers mainly don't release models that do worse on benchmarks. Running evals is the same kind of testing, just outside the corporate boundary: a pre-release feedback loop and public evaluation.

https://chatgpt.com/share/69aa1972-ae84-800a-9cb1-de5d5fd7a4...
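The eval idea above can be sketched in a few lines. Everything here is hypothetical: `call_model` is a stand-in for whatever client you use, and the task set is a toy example.

```python
# Toy eval harness: score several models on the same fixed task set so
# you can compare before switching. `call_model` is a hypothetical
# stand-in for a real API client.
def call_model(model: str, prompt: str) -> str:
    # Placeholder: in practice this would hit the provider's API.
    return "4" if "2+2" in prompt else ""

TASKS = [
    {"prompt": "What is 2+2? Answer with just the number.", "expect": "4"},
]

def score(model: str) -> float:
    """Fraction of tasks where the model's answer matches exactly."""
    hits = sum(call_model(model, t["prompt"]).strip() == t["expect"] for t in TASKS)
    return hits / len(TASKS)

# Compare candidate models on the tasks you actually care about.
results = {m: score(m) for m in ["model-a", "model-b"]}
```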

hobofan 2 hours ago | parent | prev [-]

That's true only in theory, but not in practice. In practice every inference provider handles errors (guardrails, rate limits) somewhat differently and with different quirks, some of which only surface in production usage, and Google is one of the worst offenders in that regard.

beklein an hour ago | parent | prev | next [-]

Not sure why you think Anthropic doesn't have the same problems? Their version numbers jump across different model lines too: Opus has 4.6, 4.5, and 4.1; Sonnet has 4.6 and 4.5 but no 4.1; and Haiku has no 4.6, just 4.5, no 4.1, no 4, and then only the old 3.5...

Also their pricing, based on 5m/1h cache writes, cache read hits, additional charges for US inference (but only for Opus 4.6, I guess), and optional features such as more context and faster speed for some random multiplier, is also complex and actually quite similar to OpenAI's pricing scheme.

To me it looks like everybody has similar problems and solutions for the same kinds of problems and they just try their best to offer different products and services to their customers.

selcuka 5 minutes ago | parent | next [-]

With Anthropic you always have 3 models to choose from: Opus-latest, Sonnet-latest, and Haiku-latest, from the best/slowest to the worst/fastest.

The version numbers are mostly irrelevant, as afaik the price per token doesn't change between versions.

svachalek 23 minutes ago | parent | prev [-]

It's much more consistent. Only three lines, numbered 4.6, 4.6, and 4.5, and it's clear they're tiers and not alternate product lines. Until recently GPT didn't seem to have any naming convention at all, and it's not intuitive when every version number is a whole different class of tool.

The pricing is more complex but also easy: Opus > Sonnet > Haiku, no matter how you tweak those variables.

CobrastanJorji 2 hours ago | parent | prev | next [-]

> Google essentially only has Preview models.

It's really nice to see Google get back to its roots by launching things only to "beta" and then leaving them there for years. Gmail was "beta" for at least five years, I think.

FINDarkside an hour ago | parent [-]

Also, GCP Cloud Run domain mapping, a pretty fundamental feature for a cloud product, has been in "preview" for over 5 years now.

embedding-shape 3 hours ago | parent | prev | next [-]

> OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4.

I guess that's true, but geared towards API users.

Personally, since "Pro Mode" became available, I've been on the plan that enables it. It's one price point and I get access to everything, including enough Codex usage that someone who spends a lot of time programming never manages to hit any usage limits, although I've gotten close once with the new (temporary) Spark limits.

awad an hour ago | parent | prev | next [-]

Incredibly curious how Google's approach to support, naming, versioning etc will mesh with the iOS integration.

abustamam 31 minutes ago | parent | prev | next [-]

I mean, Google notoriously discontinues even non-beta software, so if your concern is assurance that the model won't get discontinued, you may as well just use whatever you want, since GA models could also get discontinued.

biophysboy 2 hours ago | parent | prev | next [-]

Wow, is that what preview means? I see those model options in GitHub Copilot (all my org allows right now). I was under the impression that preview meant a free trial or a limited number of queries. Kind of a misleading name.

snug 9 minutes ago | parent [-]

Pretty common to call something that isn't ready a preview

raincole 3 hours ago | parent | prev | next [-]

They aggressively retire models, so GPT 5.1 and 5.2 are probably going to go soon.

hobofan 2 hours ago | parent [-]

In Azure Foundry, they list GPT 5.2's retirement as "No earlier than 2027-05-12" (it might leave OpenAI's normal API earlier than that). I'm pretty certain that Gemini 3, which isn't even in GA yet, will be retired earlier than that.

delaminator 3 hours ago | parent | prev | next [-]

two great problems in computing

naming things

cache invalidation

off by one errors

rurban 2 hours ago | parent [-]

Biggest problem right now in computing:

Out of tokens until end of month

arthurcolle 3 hours ago | parent | prev | next [-]

There is a lot of opportunity here for the AI infrastructure layer on top of tier-1 model providers

motoxpro 3 hours ago | parent [-]

This is what clouds like AWS, Azure, and GCP solve (Vertex AI, etc.). They are already an abstraction on top of the model makers, with distribution built in.

I also don't believe there is any value in trying to aggregate consumers or businesses just to clean up the model makers' names/release schedules. Consumers just use the default, and businesses need clarity on the underlying change (e.g. why is it acting differently? Oh, Google released 3.6).

arthurcolle 2 hours ago | parent [-]

Do the end users really care about the models at all, or about the effects that the models can cause?

m3kw9 2 hours ago | parent | prev [-]

That's how they had it for years. It's a mess, but a controlled one.