| ▲ | nostromo 2 hours ago |
| They'll all do this eventually. We're in the part of the market cycle where everyone fights for market share by selling dollar bills for 50 cents. When a winner emerges they'll pull the rug out from under you and try to wall off their garden. Anthropic just forgot that we're still in the "functioning market competition" phase of AI and not yet in the "unstoppable monopoly" phase. |
|
| ▲ | barrenko an hour ago | parent | next [-] |
| "Naveen Rao, the Gen AI VP of Databricks, phrased it quite well: all closed AI model providers will stop selling APIs in the next 2-3 years. Only open models will be available via APIs (…) Closed model providers are trying to build non-commodity capabilities and they need great UIs to deliver those. It's not just a model anymore, but an app with a UI for a purpose." ~ https://vintagedata.org/blog/posts/model-is-the-product A. Doria > new Amp Free (10$) access is also closed up since of last night |
|
| ▲ | bambax 2 hours ago | parent | prev | next [-] |
| Unstoppable monopoly will be extremely hard to pull off given the number of quality open (weights) alternatives. I only use LLMs through OpenRouter and switch somewhat randomly between frontier models; they each have some amount of personality but I wouldn't mind much if half of them disappeared overnight, as long as the other half remained available. |
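The switching pattern described above can be sketched against OpenRouter's OpenAI-compatible chat completions endpoint. This is a minimal illustration, not an endorsement of any particular models: the model IDs in the shortlist are assumptions that change over time, and actually sending the request would require an `Authorization: Bearer <api-key>` header, which is omitted here.

```python
import random

# Illustrative shortlist of frontier models on OpenRouter.
# These IDs are assumptions; check OpenRouter's model list for current ones.
FRONTIER_MODELS = [
    "anthropic/claude-sonnet-4",
    "openai/gpt-4o",
    "google/gemini-2.0-flash-001",
    "deepseek/deepseek-chat",
]

def build_request(prompt: str, models=FRONTIER_MODELS) -> dict:
    """Build an OpenAI-style chat payload for OpenRouter's
    /api/v1/chat/completions endpoint, picking a model at random."""
    return {
        "model": random.choice(models),
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize this thread in one sentence.")
```

Because the payload shape is shared across providers, swapping (or losing) half the models on the list is just an edit to `FRONTIER_MODELS`, which is the low-switching-cost point the comment is making.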
| |
| ▲ | nostromo 2 hours ago | parent | next [-] | | I'm old, so I remember saying the same thing about Google and search. I hope you're right! | | |
| ▲ | ar0 an hour ago | parent | next [-] | | I think the big difference is that Google is free: everyone uses Google because it doesn’t cost anything and was, for a long time, the best search engine out there. I am sure that if Google suddenly charged a few dollars per month for access, Bing’s market share would explode overnight, because it would become “good enough but cheaper”. With AI models, using a model that is “good enough but cheaper” is already an option. | | |
| ▲ | safety1st an hour ago | parent [-] | | There's no reason that a sizeable portion of LLM usage can't and won't end up free/ad-sponsored. Cutting edge stuff for professional use will probably be monetized via subscription or API credits for a long time to come. But running an older and less resource intensive model works just fine for tasks like summarization. These models will just become another feature in a "free" product that people pay for by watching or clicking ads. I imagine the split will look a lot like b2b vs b2c in other technologies: b2b customers tend to be willing to pay for tech when it offers a competitive advantage, reduces their operating costs, etc., while b2c customers mostly just guzzle free slop. |
| |
| ▲ | bambax an hour ago | parent | prev | next [-] | | I too am old. Google search is free, hard to replicate, and while there used to be lots of search engines, Google was (and arguably still is) miles ahead of all the others in quality and performance. A model is hard to train, but it doesn't need to be hyper up to date or have a new version come out every day. Inference is cheap (it seems?) and quality is comparable. So it's unclear how expensive offerings could win over free alternatives. Of course I could be wrong; I don't have a crystal ball, and a single winner could still emerge. I just don't think this is the same as Google. | | |
| ▲ | mentalgear an hour ago | parent [-] | | I would say Google's monopoly comes mainly from name recognition, definitely not from still being ahead in core search: I have been using DuckDuckGo for two years, ever since I noticed its results are the same as or better than Google's. |
| |
| ▲ | juliendorra 2 hours ago | parent | prev [-] | | In the first years, I remember no other search engine was close to Google quality. We all ditched AltaVista because Google was incredibly better. It would have been awful to switch back to any other options.
We can already switch between the three big proprietary models without noticing much difference, so it’s quite a different landscape. | | |
| |
| ▲ | ai-x 2 hours ago | parent | prev | next [-] | | This is saying we have hundreds of open source OSes and Windows will never be a monopoly. Software always gets monopoly simply by usage. Every time a model gets used for esoteric use cases, it gets more training data (that a decentralized open-weight model doesn't get) and it starts developing its moat. | | |
| ▲ | illiac786 2 hours ago | parent | next [-] | | I think Windows has a historical monopoly: it was bundled with PC hardware, and the vast majority of apps were only ever published for Windows, over decades (one could argue it’s still true). The starting point for LLMs is very different. Who would publish software today that only integrates with ChatGPT? Only a small minority. Thus I agree: I struggle to see how a monopoly can exist here. A GPU monopoly or duopoly, though, perhaps. | |
| ▲ | kelipso 2 hours ago | parent | prev | next [-] | | It’ll be a bunch of tiny moats in that scenario. LLMs are way too generic, adaptable, and flexible in how you use them to make one big moat out of it. | |
| ▲ | JumpCrisscross 2 hours ago | parent | prev | next [-] | | > Software always gets monopoly simply by usage Most software isn't made by monopolies. More directly, enterprise-software stocks are getting hammered because AI offers them competition. | |
| ▲ | nl an hour ago | parent | prev | next [-] | | It's more like saying AWS has a monopoly on virtual machine hosting. (For those unaware, AWS doesn't have a VM monopoly, and the market dynamics seem similar) | |
| ▲ | Barrin92 an hour ago | parent | prev [-] | | >This is saying we have hundreds of open source OSes We don't; we have about three operating systems with the decades of hardware and software compatibility that make them widely usable. They're among the most complex things we've ever built. LLMs are a few thousand lines of Python hooked up to a power plant and graphics cards. This is the least defensible piece of software there has ever been. |
| |
| ▲ | dolphenstein an hour ago | parent | prev | next [-] | | OpenRouter falls into the acceptable-use category. They're targeting users who are misusing their Claude OAuth tokens on non-Anthropic products. | |
| ▲ | curtisblaine an hour ago | parent | prev [-] | | They will [try to] ban open weights for ethics / security reasons: to stop spammers, to protect children, to stop fascism, to defend minorities. Take your pick; it won't matter why, only which media case they can thrust into the spotlight first. | | |
| ▲ | bambax 39 minutes ago | parent [-] | | Yes of course they will; the CEO of Anthropic makes that argument, very openly, all the time. But it will be hard to do, I think. |
|
|
|
| ▲ | JumpCrisscross 2 hours ago | parent | prev [-] |
| > They'll all do this eventually
And if the frontier continues favouring centralised solutions, they'll get it. If, on the other hand, scaling asymptotes, the competition will be running locally. Just looking at how much Claude complains about me not paying for SSO-tier subscriptions to data tools, when they work perfectly fine in a browser, is starting to make running a slower, less-capable model locally competitive with it in some research contexts. |
| > They'll all do this eventually And if the frontier continues favouring centralised solutions, they'll get it. If, on the other hand, scaling asymptotes, the competition will be running locally. Just looking at how much Claude complains about me not paying for SSO-tier subscriptions to data tools when they work perfectly fine in a browser is starting to make running a slower, less-capable model locally competitive with it in some research contexts. |