TaupeRanger 4 hours ago

What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?

crabmusket 2 hours ago | parent | next [-]

> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Absolutely.

raincole an hour ago | parent [-]

And what? Get nationalized? Get labelled as terrorists?

The US system doesn't empower a company to say no. It should though.

aziaziazi 37 minutes ago | parent [-]

You, me, or a company don't need a system's empowerment to say "no" though. Just say it. I would certainly choose being called a "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.

You own nothing but your opinion. (No offense to personal property aficionados)

goatlover 4 hours ago | parent | prev | next [-]

I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.

lambdaphagy 3 hours ago | parent | prev | next [-]

There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter half of the 20th century. (Note that WWI by itself wasn't sufficient to prevent WWII!)

You can take issue with that argument if you want, but it's unconvincing to dismiss it without addressing it.

horacemorace 2 hours ago | parent | next [-]

There's also an extremely straightforward argument that if the current crop of authoritarian, dictatorial players had held power then, the outcome of the latter half of the 20th century would have been much different.

lambdaphagy an hour ago | parent [-]

The guy who authorized the Manhattan Project:

- had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment

- threatened court-packing until SCOTUS backed down and started rubber-stamping his agenda

- ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini

- interned 120k people without due process, on the basis of ethnicity

- turned a national party into a personal patronage system

- threatened to override the legislature if it didn’t start passing laws he liked

I'm not even saying any of this is good or bad; clearly in the official history it was retroactively justified by victory in WWII. But it's a bit rich to say that the bomb wasn't developed under authoritarian conditions.

idiotsecant 3 hours ago | parent | prev | next [-]

That's a little bit like saying the bullet in the gun prevented someone getting shot while playing Russian Roulette. We pulled back that hammer several times, and it's purely happenstance that it didn't go off. MAD has that acronym for a reason.

lambdaphagy an hour ago | parent [-]

I agree that the risk of an accidental strike was a huge problem with the theory of nuclear deterrence, but the question is: compared to what? In expectation or even in a 1st percentile scenario, was MAD worse than a world where the USSR is a unilateral nuclear power? For that matter, what would it have taken to get a stronger SALT treaty sooner?

I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?

I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.

estearum 2 hours ago | parent | prev [-]

Great, now go ahead and prove that AI also reaches a strategic equilibrium. This was pretty much self-evident with nuclear weapons, so it should probably be self-evident for AI too, if it were true.

michelsedgh 3 hours ago | parent | prev [-]

So would you have preferred the Nazis to develop the most powerful weapons and win the world war? (Which is what they were trying to do.)

estearum 2 hours ago | parent | next [-]

With the benefit of hindsight we know the Nazis in fact were not racing to develop The Bomb. Reasonable assumption to have oriented around at the time though.

michelsedgh 2 hours ago | parent [-]

It's not just the atomic bomb I'm talking about. The USA had the best production of fighter jets, bombers, all kinds of communication technology, deciphering technology, and all the ammunition. All of those together beat the Nazis, and the Nazis were trying their best to develop better and more advanced technologies than the USA!

anonym29 3 hours ago | parent | prev | next [-]

If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?

If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?

andsoitis 3 hours ago | parent [-]

> If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?

No

> If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?

The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.

mothballed 3 hours ago | parent | prev [-]

Did WMDs have a meaningful effect on stopping the Nazis? I thought the bomb wasn't dropped until after they surrendered.

anonym29 3 hours ago | parent [-]

The only two atomic weapons ever deployed weren't even targeting Nazi Germany, but Japan. Dark but true: they were both deliberately and knowingly targeted at civilian populations.

cies 2 hours ago | parent [-]

And they inflicted less damage than the firebombing campaigns on civilian population centers that were carried out alongside the A-bombs.

The A-bombs were not the worst part of the attack on Japan, and thus were not "needed to end the war". They were part of marketing /the/ superpower.

estearum 2 hours ago | parent [-]

"Needed to win the war," no. The US could've continued to firebomb and then follow with a land invasion, which would've killed both more Japanese and more Allies.

Was it the best path to end the war? Certainly.

The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade.

archagon 3 hours ago | parent | prev [-]

Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?

johnisgood 41 minutes ago | parent | next [-]

Time to stop paying your taxes. :P

andsoitis 3 hours ago | parent | prev | next [-]

> I absolutely don’t want tech companies to use the money I pay them to harm people.

Just one example of many, but the companies that make the CPUs you and all of us use every day also supply to militaries.

I am unaware of any tech company that directly does physical warfare on the battlefield against humans.

tbossanova 35 minutes ago | parent [-]

Another example: the companies that make drinkable water also supply to militaries. But there might be a difference between supplying drinking water and making AI killing machines.

andsoitis 23 minutes ago | parent [-]

> making AI killing machines

What's an example of a company that's making killing machines that a typical consumer or someone on HN might be buying products or services from?

scottyah 3 hours ago | parent | prev [-]

Because it's painfully short-sighted, or maliciously ignorant.

archagon 3 hours ago | parent [-]

No, it’s just that I don’t want the money I spend to have blood on it. Trivially simple.

NewsaHackO 3 hours ago | parent [-]

What if I told you that it's way too late for that?