varispeed (6 minutes ago): Sounds like a good opportunity to pause spending on nerfed 4.6, wait for the new model to be released, and then max out over 2 weeks before it gets nerfed again.
enraged_camel (3 hours ago): That does not sound very believable. Last time Anthropic released a flagship model, it was followed by GPT Codex literally that afternoon.
cyanydeez (2 hours ago): Y'all know they're teaching to the test. I'll wait till someone devises a novel test that isn't contained in the datasets. Sure, they're still powerful.
swalsh (an hour ago): My understanding is GPT 6 works via synaptic space reasoning... which I find terrifying. If true, I hope OpenAI does some safety testing on that, beyond what they normally do.
tyre (19 minutes ago): From the recent New Yorker piece on Sam:

> “My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”
levocardia (39 minutes ago): Oh you mean literally the thing in AI2027 that gets everyone killed? Wonderful.
notrealyme123 (41 minutes ago): That sounds really interesting. Do you have some hints on where to read more?
arm32 (38 minutes ago): Oh, of course they will /s
cedws (3 hours ago): More than killer AI, I'm afraid of Anthropic/OpenAI going into full rent-seeking mode, so that everyone working in tech is forced to fork out loads of money just to stay competitive in the market. These companies could also choose to give exclusive access to hand-picked individuals and cut everyone else off, and there would be nothing to stop them. This is already happening to some degree: GPT 5.3 Codex's security capabilities were given exclusively to those approved for a "Trusted Access" programme.
TypesWillSaveUs (2 hours ago): Describing providing a highly valuable service for money as "rent seeking" is pretty wild.
bertil (an hour ago): It could be, formally, if they had a monopoly. However, I’m tempted to compare it to GitHub: if I join a new company, I will ask to be added to their GitHub account without hesitation; I couldn’t possibly imagine they wouldn’t have one. What makes the cost of that subscription reasonable is not just GitHub’s fear of a crowd with pitchforks showing up at their office, but also the fact that a possible answer to my non-question might be “Oh, we actually use GitLab.” If Anthropic is as good as they say, it seems fairly doable to use the service to build something comparable: poach a few disgruntled employees, and leverage the promise of undercutting a many-trillion-dollar company to become a many-billion-dollar company to get investors excited. I’m sure the founders of Anthropic will have more money than they could possibly spend in ten lifetimes, but I can’t imagine there wouldn’t be some competition. Maybe this time it’s different, but I can’t see how.
johnsimer (35 minutes ago): > It could be, formally, if they have a monopoly.

You have 2 labs at the forefront (Anthropic/OpenAI), Google close behind, and xAI, Meta, and half a dozen Chinese companies all within 6-12 months. There is plenty of competition, and the price of equally intelligent tokens drops rapidly whenever a new intelligence level is achieved. Unless the leading company uses a model to nefariously take over or neutralize another company, I don't really see a monopoly happening in the next 3 years.
1attice (2 hours ago): My housing is pretty valuable. I pay rent. Which timeline are you in?
aspenmartin (2 hours ago): Well, don't forget we still have competition. Were Anthropic to rent-seek, OpenAI would undercut them. Were OpenAI and Anthropic to collude, that would be illegal. And were Anthropic to capture the entire coding-agent market and THEN rent-seek: these days it's never been easier to raise $1B and start a competing lab.
cedws (2 hours ago): In practice this doesn't work, though; the Mastercard-Visa duopoly is an example. Two competing forces don't create aggressive enough competition to benefit the consumer. The only hope we have is the Chinese models, but it will always be too expensive to run the full models yourself.
brokencode (2 hours ago): New companies can enter this space. Google's competing, though behind. Maybe Microsoft, Meta, Amazon, or Apple will come out with top-notch models at some point. There is no real barrier to a customer of Anthropic adopting a competing model in the future; all it takes is a big tech company deciding it's worth it to train one. On the other hand, Visa/Mastercard have a lot of lock-in, due to consumers only wanting a card that's accepted everywhere and merchants not bothering to support a new type of card that no consumer has. There's a major chicken-and-egg problem to overcome there.
sghiassy (2 hours ago): Chinese competition can always be banned. Example: Chinese electric car competition.
sho_hn (2 hours ago): That's what OP was saying, I think, noting that running them locally won't be a solution.
oblio (an hour ago): Also Chinese smartphones. Huawei was about 12-18 months from becoming the biggest smartphone manufacturer in the world a few years ago. If it had been allowed to sell its phones freely in the US, I'm fairly sure Apple would have been closer to Nokia than to current-day Apple.
aurareturn (an hour ago): If Huawei had never been banned from using TSMC, they'd likely have a real Nvidia competitor and might have surpassed Apple in mobile chip design. They actually beat Apple's A series to become the first phone to use TSMC's N7 node.
therealdeal2020 (an hour ago): But you are assuming that the magical wizards are the only ones who can create powerful AIs... mind you, these people were born just a few decades ago. Their knowledge will be transferred, and it will only take a few more decades until anyone can train powerful AIs... you can only sit on tech for so long before everyone knows how to do it.
cedws (40 minutes ago): It's not a matter of knowledge; it's a matter of resources. It takes billions of dollars of hardware to train a SOTA LLM, and that figure is increasing all the time. You cannot possibly hope to compete as an independent or a small startup.
block_dagger (4 minutes ago): Presumably, the hardware to run this level of model will be democratized within the timeframe of the parent comment.
MattRix (an hour ago): The thing is that the current models can ALREADY replicate most software-based products and services on the market, and the open source models are not far behind. At a certain point I'm not sure it matters if the frontier models can do it faster and better. I see how they're useful for really complex and cutting-edge use cases, but that's not what most people are using them for.
guzfip (3 hours ago):

> A jump that we will never be able to use since we're not part of the seemingly minimum 100 billion dollar company club as requirement to be allowed to use it.

> They should just say they'll never release a model of this caliber to the public at this point and say out loud we'll only get gimped

Duh, this was fucking obvious from the start. The only people saying otherwise were zealots who needed a quick line to dismiss legitimate concerns.
quotemstr (3 hours ago): This is why the EAs, and their almost comic-book-villain projects like "control AI dot com", cannot be allowed to win. One private company gatekeeping access to revolutionary technology is riskier than any consequence of the technology itself.
scrawl (an hour ago): Having done a quick search of "control AI dot com", it seems their intent is to educate lawmakers and government in order to aid the development of a strong regulatory framework around frontier AI development. Not sure how this is consistent with "One private company gatekeeping access to revolutionary technology"?
quotemstr (an hour ago): > strong regulatory framework around frontier AI development

You have to decode feel-good words into concrete policy. The EAs believe that the state should prohibit entities not aligned with their philosophy from developing AIs beyond a certain power level.
frozenseven (2 hours ago): Couldn't agree more. The "safest" AI company is actually the biggest liability. I hope other companies make a move soon.
FeepingCreature (2 hours ago): No it isn't lol. The consequence of the technology literally includes human extinction. I prefer 0 companies, but I'll take 1 over 5.