SirensOfTitan 2 hours ago

Even if you don't care about the needless human suffering the US has caused through this operation, this conflict threatens global stability because of oil supply disruptions, and if the US keeps this up it could get very bad very quickly.

I worked briefly in defense-tech, and there is a huge blind spot in this field. While I worked with a ton of thoughtful, ethical, and talented people from the military, the blind spot shows up around support of the "warfighter." It is certainly noble and worthwhile work to protect soldiers from harm through technology, but I got the sense that some people (especially the tech people who were never in the military) didn't think enough about the ethical concerns when dealing with people labeled as the US's "enemies." And further, what about when the US itself is the aggressor? While active warfighters have to follow the chain of command, companies can and should apply ethical constraints--but they often don't, because DoD contracts are lucrative and (especially if you're not a prime) hard won.

I've had a lot of fun playing with Claude 4.6, but it is entirely unacceptable that this technology is being used in this conflict with Iran. I will cancel my account once this month's subscription is up in 2 weeks. The US is the aggressor here, that is certain. For a private company that is supposedly oriented toward ethics, support of this conflict is extremely illuminating.

With that said, I have thought a tremendous amount about whether someone like Dario could even steer the ship away from support of a conflict like this at this point. We are all susceptible to market forces, and companies like Anthropic need as much revenue as possible to be able to maintain themselves and grow given the cost of training. There is certainly an argument to be made that if he did so, he might lose the confidence of investors and lose control entirely. This raises the question: is shareholder/capital optimization the best way to organize our society?

skeledrew an hour ago | parent | next

> We are all susceptible to market forces, and companies like Anthropic need as much revenue as possible to be able to maintain themselves and grow given the cost of training.

There's also the consideration that if they come across as too against US military support, the administration can and will make things extremely painful for them. I suspect they've actually gotten off pretty easy just being named a supply chain risk (so far). Imagine the backlash if they'd, for example, accepted contracts with China. Or even so much as hinted that they weren't open to most military use cases.

SirensOfTitan an hour ago | parent

As soon as you accept "we need to survive to do good," survival becomes the priority and the good becomes negotiable. And so every compromise erodes their ethical position a little more.

Living in accordance with an ethical framework only matters when the decision is hard; there are clearly consequences to doing so. But Anthropic has forfeited their right to claim the moral high ground. Their posturing against OpenAI is based on a false dichotomy: they are arguing over a carve-out that is incredibly minor compared with their broader exposure.

I think Anthropic, with all of their babbling about alignment, could avoid contracting with the military at this stage and still not actively contract with China.

scarecrowbob 19 minutes ago | parent | prev

I've found that reading odds and ends outside of my own academic, professional, or theoretical interests sometimes nets interesting things.

At one point I got curious about how the US military thinks about insurgencies, so I read their manual on how to fight them. As someone holding a lot of dissident views in the US, I found it pretty interesting.

One thing I took away was the feeling that at no point did the manual ever define what an "insurgent" is, beyond whoever the US government says the insurgents are.

So you have a situation where, ultimately, there's no external reality testing, and "reality" is simply whatever the command structure defines it to be.

I know that sounds overly simple--of course the military follows a chain of command, unquestioned right up to its civilian commander in chief.

Why I feel that is a useful observation is that, to your question, people are constantly deferring their ethical judgments. And I suspect there is some cognitive bias in play that lets folks feel this deferral can't be happening across all these systems at once.

In the case of businesses, the deferral is to "the market"--which is reactive and as such doesn't have "judgment", and even if it did, its needs aren't "human", so relying on it as a human seems dangerous. So to your question, my answer is usually "probably not". And further, unless people stop deferring their judgments to the imaginary of the spectacular market, eventually shit's gonna break.

In the case of the military, we can see what happens when radically nihilistic people (pedophilic and sociopathic media personalities) are put at the helm.

My larger point, though, is that our usual assumption seems to be that all these other folks are likely to exercise their faculties to test out reality and, hopefully, when what they're told doesn't line up with that reality, push back and prevent dumb shit from happening.

But all these systems are set up to prevent exactly that, so it doesn't seem at all strange to me that they are starting to break in the ways they seem to be failing.