adamtaylor_13 | 3 hours ago
Writing out a thought I had, someone please critique my reasoning here... What if Anthropic just shrugged, dissolved the company, and open-sourced all of the Opus weights? Could this harm OpenAI and advance AI in a reasonable way? Look, I know it's an insane idea. I'm just curious what the most unhinged response to this might be.
nostrademons | 2 hours ago
I kinda wonder if this is how we got DeepSeek. It was developed by a Chinese hedge fund. Entirely possible their business model was to take out large leveraged puts against the major U.S. AI vendors, undercut their business models with an entirely open-source model, and profit. The stock market certainly dropped in a massive way when DeepSeek was released, so if they traded against NVDA/GOOG/META et al., they profited in a big way.
jrsj | 2 hours ago
They would never do this, because the entire point of the company is to try to control what AI is allowed to do, who is allowed to use it, and what they're allowed to do with it. The overarching philosophy of Anthropic is explicitly opposed to open models. If it were up to them, it would be illegal to run inference on them in the U.S.
stirlo | 3 hours ago
There are plenty of markets outside the Pentagon to sell to. Far more likely is that they spin up a defence-focused subsidiary with slightly different policies if they really want to sell to them.
jdndbdjsj | 2 hours ago
If I were to download those weights, I couldn't run them unless I spent $100k on a cluster, so the privacy advantage isn't there yet. We already have Groq, Cerebras, AWS Bedrock, and others in the open-model inference space, so the model would be usable that way. Is Claude better than Llama, Qwen, etc.? Probably. For now. But for how long? Dissolving means relying on Meta or DeepSeek etc. to pick up and carry on tuning; otherwise it'll eventually be as useful as GPT-2 or an Atari ST in a competitive environment. Also, open-sourcing the weights is handing them over to the DoD (aka DoW). Complicated question, but probably not the best move. Keeping going means continuing to work on safety research.
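The "$100k cluster" point is easy to sanity-check with weights-only arithmetic. A minimal sketch, assuming a hypothetical 500B-parameter dense model (Opus's actual size is not public) served at bf16 precision:

```python
# Back-of-envelope memory estimate for hosting a large dense model locally.
# The parameter count is a hypothetical placeholder, NOT a known figure.

def weights_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Weights-only memory in GB at the given precision (bf16/fp16 = 2 bytes).

    Ignores KV cache and activation overhead, which add substantially more
    in real serving.
    """
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

# Hypothetical 500B-parameter dense model at bf16:
weights_gb = weights_memory_gb(500)      # 1000 GB of weights alone
gpus_needed = weights_gb / 80            # assuming 80 GB per H100-class GPU
print(weights_gb, gpus_needed)           # 1000.0 12.5
```

At multiple tens of thousands of dollars per such GPU, a dozen-plus of them lands well into six figures before networking, power, or the memory overhead that inference actually needs, which is roughly the commenter's point.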
mitthrowaway2 | 2 hours ago
Then the Pentagon would freely use it for autonomous weapons, which is exactly what Anthropic doesn't want. Next question?
BoiledCabbage | 3 hours ago
> Look I know it's an insane idea. I'm just curious what the most unhinged response to this might be.

I mean, what if all the employees stripped off their clothes and walked through the streets naked while barking, then called up their middle school math teachers and barked like dogs, then moved to a commune and stood on their heads?

> Writing out a thought I had, someone please critique my reasoning here...

To critique your reasoning, it makes sense to also include a criterion of something they might reasonably do. There are an infinite number of unhinged things a group of people could in theory do, but maybe start with something they would actually have an incentive to do. Why would they voluntarily dissolve their company, put themselves out of work, release their crown jewels, and get nothing for it? Yes, it's unhinged, but unless I'm missing something big, they wouldn't do that because they wouldn't at all want it to happen.
xpe | 2 hours ago
> I'm just curious what the most unhinged response to this might be.

Are you asking how dangerous open-weight models are? You could start with:

Ryan Greenblatt on the AI Alignment Forum: "When is it important that open-weight models aren't released?" https://www.alignmentforum.org/posts/TeF8Az2EiWenR9APF/when-...

From the Centre for Future Generations: "Can open-weight models ever be safe?" https://cfg.eu/can-open-weight-models-ever-be-safe/

From OpenAI authors (far from neutral): "Estimating Worst-Case Frontier Risks of Open-Weight LLMs" https://arxiv.org/abs/2508.03153