| ▲ | Jcampuzano2 3 hours ago |
| A jump that we will never be able to use, since we're not part of the seemingly-minimum-$100-billion-company club required to be allowed to use it. I get the security aspect, but if we've hit that point, any reasonably sophisticated model past this one will be able to do the damage they claim it can. They might as well be telling us they're closing up shop for consumer models. They should just say out loud that they'll never release a model of this caliber to the public, and that we'll only get gimped versions. |
|
| ▲ | cedws 2 hours ago | parent | next [-] |
| More than killer AI, I'm afraid of Anthropic/OpenAI going into full rent-seeking mode, so that everyone working in tech is forced to fork out loads of money just to stay competitive in the market. These companies could also choose to give exclusive access to hand-picked individuals and cut everyone else off, and there would be nothing to stop them. This is already happening to some degree: GPT 5.3 Codex's security capabilities were given exclusively to those approved for a "Trusted Access" programme. |
| |
| ▲ | TypesWillSaveUs 2 hours ago | parent | next [-] | | Describing providing a highly valuable service for money as `rent seeking` is pretty wild. | | |
| ▲ | bertil an hour ago | parent | next [-] | | It could be, formally, if they have a monopoly. However, I’m tempted to compare it to GitHub: if I join a new company, I will ask to be added to their GitHub account without hesitation. I couldn’t possibly imagine they wouldn’t have one. What keeps the cost of that subscription reasonable is not just GitHub’s fear of a crowd with pitchforks showing up at their office, but also the fact that a possible answer to my non-question might be “Oh, we actually use GitLab.” If Anthropic is as good as they say, it seems fairly doable to use the service to build something comparable: poach a few disgruntled employees, and leverage the promise of undercutting a many-trillion-dollar company to become a many-billion-dollar one to get investors excited. I’m sure the founders of Anthropic will have more money than they could possibly spend in ten lifetimes, but I can’t imagine there wouldn’t be some competition. Maybe this time it’s different, but I can’t see how. | | |
| ▲ | johnsimer 32 minutes ago | parent [-] | | > It could be, formally, if they have a monopoly. You have two labs at the forefront (Anthropic/OpenAI), Google close behind, and xAI/Meta/half a dozen Chinese companies all within 6-12 months. There is plenty of competition, and the price of equally intelligent tokens drops rapidly whenever a new intelligence level is reached. Unless the leading company uses a model to nefariously take over or neutralize another company, I don't really see a monopoly happening in the next 3 years. |
| |
| ▲ | 1attice 2 hours ago | parent | prev [-] | | My housing is pretty valuable. I pay rent. Which timeline are you in? | | |
| |
| ▲ | aspenmartin 2 hours ago | parent | prev | next [-] | | Well, don’t forget we still have competition. Were Anthropic to rent-seek, OpenAI would undercut them. Were OpenAI and Anthropic to collude, that would be illegal. And as for Anthropic capturing the entire coding-agent market and THEN rent-seeking: these days it’s never been easier to raise $1B and start a competing lab. | | |
| ▲ | cedws 2 hours ago | parent [-] | | In practice this doesn't work, though; the Mastercard-Visa duopoly is an example. Two competing forces don't create aggressive enough competition to benefit the consumer. The only hope we have is the Chinese models, but it will always be too expensive to run the full models yourself. | | |
| ▲ | brokencode 2 hours ago | parent | next [-] | | New companies can enter this space. Google’s competing, though behind. Maybe Microsoft, Meta, Amazon, or Apple will come out with top notch models at some point. There is no real barrier to a customer of Anthropic adopting a competing model in the future. All it takes is a big tech company deciding it’s worth it to train one. On the other hand, Visa/Mastercard have a lot of lock-in due to consumers only wanting to get a card that’s accepted everywhere, and merchants not bothering to support a new type of card that no consumer has. There’s a major chicken and egg problem to overcome there. | |
| ▲ | sghiassy 2 hours ago | parent | prev [-] | | Chinese competition can always be banned. Example: Chinese electric car competition | | |
| ▲ | sho_hn 2 hours ago | parent | next [-] | | That's what OP was saying, I think, noting that running them locally won't be a solution. | |
| ▲ | oblio an hour ago | parent | prev [-] | | Also Chinese smartphones. Huawei was about 12-18 months from becoming the biggest smartphone manufacturer in the world a few years ago. If it had been allowed to sell its phones freely in the US, I'm fairly sure Apple would have ended up closer to Nokia than to present-day Apple. | | |
| ▲ | aurareturn 42 minutes ago | parent [-] | | If Huawei had never been banned from using TSMC, they'd likely have a real Nvidia competitor and might have surpassed Apple in mobile chip design. They actually beat Apple's A series to market as the first phone chip on TSMC's N7 node. |
|
|
|
| |
| ▲ | therealdeal2020 an hour ago | parent | prev | next [-] | | But you are assuming that the magical wizards are the only ones who can create powerful AIs... mind you, these people were born just a few decades ago. Their knowledge will be transferred, and it will only take a few more decades until anyone can train powerful AIs. You can only sit on tech for so long before everyone knows how to do it. | | |
| ▲ | cedws 37 minutes ago | parent [-] | | It's not a matter of knowledge, it's a matter of resources. It takes billions of dollars of hardware to train a SOTA LLM and it's increasing all the time. You cannot possibly hope to compete as an independent or small startup. | | |
| ▲ | block_dagger a few seconds ago | parent [-] | | Presumably, the hardware to run this level of model will be democratized within the timeframe of the parent comment. |
|
| |
| ▲ | MattRix an hour ago | parent | prev [-] | | The thing is that the current models can ALREADY replicate most software-based products and services on the market. The open source models are not far behind. At a certain point I'm not sure it matters if the frontier models can do faster and better. I see how they're useful for really complex and cutting edge use cases, but that's not what most people are using them for. |
|
|
| ▲ | guzfip 2 hours ago | parent | prev | next [-] |
| > A jump that we will never be able to use since we're not part of the seemingly minimum 100 billion dollar company club as requirement to be allowed to use it.
> They should just say they'll never release a model of this caliber to the public at this point and say out loud we'll only get gimped versions.
Duh, this was fucking obvious from the start. The only people saying otherwise were zealots who needed a quick line to dismiss legitimate concerns. |
|
| ▲ | quotemstr 2 hours ago | parent | prev [-] |
| This is why the EAs, and their almost comic-book-villain projects like "control AI dot com" cannot be allowed to win. One private company gatekeeping access to revolutionary technology is riskier than any consequence of the technology itself. |
| |
| ▲ | scrawl an hour ago | parent | next [-] | | Having done a quick search for "control AI dot com", it seems their intent is to educate lawmakers & government in order to aid the development of a strong regulatory framework around frontier AI development. Not sure how this is consistent with "One private company gatekeeping access to revolutionary technology"? | | |
| ▲ | quotemstr 42 minutes ago | parent [-] | | > strong regulatory framework around frontier AI development You have to decode the feel-good words into concrete policy. The EAs believe the state should prohibit entities not aligned with their philosophy from developing AIs beyond a certain power level. |
| |
| ▲ | frozenseven 2 hours ago | parent | prev | next [-] | | Couldn't agree more. The "safest" AI company is actually the biggest liability. I hope other companies make a move soon. | |
| ▲ | FeepingCreature 2 hours ago | parent | prev [-] | | No it isn't lol. The consequence of the technology literally includes human extinction. I prefer 0 companies, but I'll take 1 over 5. |
|