| ▲ | qaid 4 hours ago |
| I was reading halfway through and one line struck a nerve with me: > But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. So not today, but the door is open for this after AI systems have gathered enough "training data"? Then I re-read the previous paragraph and realized it's specifically only criticizing > AI-driven domestic mass surveillance And it neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance. A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War |
|
| ▲ | xeonmc 4 hours ago | parent | next [-] |
| > I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and people-first[0]. [0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO... |
|
| ▲ | nubg 4 hours ago | parent | prev | next [-] |
| I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future. |
| |
| ▲ | taurath 12 minutes ago | parent | next [-] | | > It's not up to Dario to try to make absolute statements about the future. That's insane to say, given that he's literally acting in the public sphere as the mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs and AGI will take over our society and kill us all. | |
| ▲ | m000 3 hours ago | parent | prev | next [-] | | How about the present and his personal beliefs? "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries." This reads like his objection is not to "autocratic", but to "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies. | | |
| ▲ | anjellow 2 hours ago | parent | next [-] | | Some people can’t help themselves to read this like a Ouija board. | |
| ▲ | jacquesm an hour ago | parent | prev | next [-] | | That all works right up until the United States becomes autocratic and that process is well underway. So yes, the second part of your comment is what is going to come back to haunt them. The road to hell is paved with the best intentions. | |
| ▲ | estearum 2 hours ago | parent | prev [-] | | Western liberal ideals are better than the opposite. It is misanthropic to build autocratic societies. |
| |
| ▲ | andrewljohnson 36 minutes ago | parent | prev | next [-] | | This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity. | |
| ▲ | nhinck2 2 hours ago | parent | prev | next [-] | | He does it all the time. | |
| ▲ | camillomiller 3 hours ago | parent | prev | next [-] | | And yet he's quite happy to make just such statements when it helps drum up his own product for investors | |
| ▲ | trvz 4 hours ago | parent | prev [-] | | He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him. | | |
|
|
| ▲ | ghshephard 4 hours ago | parent | prev | next [-] |
| I think it goes without saying that once the systems are reliable, fully-autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that right now, they can't provide those assurances. When they can, I suspect those restrictions will be relaxed. |
| |
|
| ▲ | TaupeRanger 4 hours ago | parent | prev | next [-] |
| What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose? |
| |
| ▲ | crabmusket an hour ago | parent | next [-] | | > Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? Yes. Absolutely. | | |
| ▲ | raincole an hour ago | parent [-] | | And what? Get nationalized? Get labelled as terrorists? The US system doesn't empower a company to say no. It should though. | | |
| ▲ | aziaziazi 36 minutes ago | parent [-] | | You, me or a company don't need the system's empowerment to say "no" though. Just say it. I would certainly choose being called a "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones. You own nothing but your opinion. (No offense to personal property aficionados) |
|
| |
| ▲ | goatlover 4 hours ago | parent | prev | next [-] | | I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk. | | |
| ▲ | 3 hours ago | parent | next [-] | | [deleted] | |
| ▲ | lambdaphagy 3 hours ago | parent | prev | next [-] | | There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter 20th. (Note that WWI by itself wasn’t sufficient to prevent WWII!) You can take issue with that argument if you want but it’s unconvincing not to address it. | | |
| ▲ | horacemorace 2 hours ago | parent | next [-] | | There’s also an extremely straightforward argument that if the current crop of authoritarian dictatorial players in power now had been then that the outcome of the latter 20th would have been much different. | | |
| ▲ | lambdaphagy an hour ago | parent [-] | | The guy who authorized the Manhattan project: - had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment - threatened court-packing until SCOTUS backed down and started rubber-stamping his agenda - ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini - interned 120k people without due process, on the basis of ethnicity - turned a national party into a personal patronage system - threatened to override the legislature if it didn’t start passing laws he liked Not saying any of this is good or bad; clearly in the official history it was retroactively justified by victory in WWII. But it’s a bit rich to say that the bomb wasn’t developed under authoritarian conditions. |
| |
| ▲ | idiotsecant 3 hours ago | parent | prev | next [-] | | That's a little bit like saying the bullet in the gun prevented someone getting shot while playing Russian Roulette. We pulled back that hammer several times, and it's purely happenstance that it didn't go off. MAD has that acronym for a reason. | | |
| ▲ | lambdaphagy an hour ago | parent [-] | | I agree that the risk of an accidental strike was a huge problem with the theory of nuclear deterrence, but the question is: compared to what? In expectation or even in a 1st percentile scenario, was MAD worse than a world where the USSR is a unilateral nuclear power? For that matter, what would it have taken to get a stronger SALT treaty sooner? I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk? I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium. |
| |
| ▲ | estearum 2 hours ago | parent | prev [-] | | Great, now go ahead and prove that AI also reaches strategic equilibrium. This was pretty much self-evident with nuclear weapons so should probably be self-evident for AI too, if it were true. |
| |
| ▲ | michelsedgh 3 hours ago | parent | prev [-] | | So would you have preferred the Nazis to develop the most powerful weapons and they win the world war? (which they were trying to do?) | | |
| ▲ | estearum 2 hours ago | parent | next [-] | | With the benefit of hindsight we know the Nazis in fact were not racing to develop The Bomb. Reasonable assumption to have oriented around at the time though. | | |
| ▲ | michelsedgh 2 hours ago | parent [-] | | It's not just the atomic bomb I'm talking about: the USA had the best production of fighter jets, bombers, all kinds of communication technology, deciphering technology, all the ammunition. All of those together beat the Nazis, and they were trying their best to develop better and more advanced technologies than the USA! | |
| |
| ▲ | anonym29 3 hours ago | parent | prev | next [-] | | If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development? If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet? | | |
| ▲ | andsoitis 3 hours ago | parent [-] | | > If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development? No > If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet? The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs. |
| |
| ▲ | mothballed 3 hours ago | parent | prev [-] | | Did WMDs have a meaningful effect on stopping the Nazis? I thought the bomb wasn't dropped until after they surrendered. | | |
| ▲ | anonym29 3 hours ago | parent [-] | | The only two atomic weapons ever deployed weren't even targeting Nazi Germany, but Japan. Dark but true: they were both deliberately and knowingly targeted at civilian populations. | | |
| ▲ | cies 2 hours ago | parent [-] | | And inflicted less damage than the firebombing campaigns on civilian population centers that were carried out alongside the A-bombs. The A-bombs were not the worst part of the attack on Japan. And thus were not "needed to end the war". They were part of marketing /the/ super power. | |
| ▲ | estearum 2 hours ago | parent [-] | | "Needed to win the war," no. The US could've continued to firebomb and then follow with a land invasion, which would've killed both more Japanese and more Allies. Was it the best path to end the war? Certainly. The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade. |
|
|
|
|
| |
| ▲ | archagon 3 hours ago | parent | prev [-] | | Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial? | | |
| ▲ | johnisgood 40 minutes ago | parent | next [-] | | Time to stop paying your taxes. :P | |
| ▲ | andsoitis 3 hours ago | parent | prev | next [-] | | > I absolutely don’t want tech companies to use the money I pay them to harm people. Just one example of many, but the companies that make the CPUs you and all of us use every day also supply to militaries. I am unaware of any tech company that directly does physical warfare on the battlefield against humans. | |
| ▲ | tbossanova 34 minutes ago | parent [-] | | Another example: those companies that make drinkable water, also supply to militaries. But there might be a difference between supplying drinking water and making AI killing machines | | |
| ▲ | andsoitis 22 minutes ago | parent [-] | | > making AI killing machines What’s an example of a company that’s making killing machines that a typical consumer or someone HN might be buying product or services from? |
|
| |
| ▲ | scottyah 3 hours ago | parent | prev [-] | | Because it's painfully short-sighted, or maliciously ignorant. | | |
| ▲ | archagon 3 hours ago | parent [-] | | No, it’s just that I don’t want the money I spend to have blood on it. Trivially simple. | | |
|
|
|
|
| ▲ | skeledrew 3 hours ago | parent | prev | next [-] |
| Well, if they hadn't stated that they were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on eggshells. |
|
| ▲ | 01100011 29 minutes ago | parent | prev | next [-] |
| We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs. |
| |
| ▲ | kgwxd 13 minutes ago | parent [-] | | But then a person can be blamed for the outcome. We can't have that! |
|
|
| ▲ | altpaddle 4 hours ago | parent | prev | next [-] |
| Unfortunately I think the writing is clearly on the wall. Fully autonomous weapons are coming soon |
| |
| ▲ | not_the_fda 3 hours ago | parent | next [-] | | And that's the end of democracy. One of the safeguards of democracy is a military that is trained not to turn against the citizens. Once a government has fully autonomous weapons, it's game over. They can point those weapons at the populace at the flip of a switch. | | |
| ▲ | levocardia 4 hours ago | parent | prev | next [-] | | Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now. | |
| ▲ | tempestn 3 hours ago | parent | prev | next [-] | | If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment. I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future. Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already. | | |
| ▲ | scottyah 3 hours ago | parent [-] | | Hah, I had the same realization about landmines. Along with the other commenter, really it would be better to add intelligence to these autonomous systems to limit the nastiness of the currently-deployed systems. If a landmine could distinguish between a real target and an innocent civilian 50 years later, it'd be a lot better. | |
| ▲ | kgwxd 9 minutes ago | parent | next [-] | | It's weird that people still think that the people whose job it is to kill people, or make things that kill people, really care about people more than the killing part. They don't give a shit who blows up, as long as no one comes knocking on their door about it. | |
| ▲ | jacquesm an hour ago | parent | prev | next [-] | | Many landmines disarm after a while. | |
| ▲ | mothballed 3 hours ago | parent | prev [-] | | A landmine blowing up the enemy civilian 50 years later is probably seen as an advantage by the force deploying them. A bit like "salting the earth." | | |
|
| |
| ▲ | scottyah 3 hours ago | parent | prev [-] | | It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans. Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not. | | |
|
|
| ▲ | rafark 2 hours ago | parent | prev | next [-] |
| I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context because the topic at hand is about americans? I don’t know but it gives “my people are more important than your people”, exactly as you said in your last paragraph |
|
| ▲ | orochimaaru 4 hours ago | parent | prev | next [-] |
| They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though. |
|
| ▲ | yujzgzc 3 hours ago | parent | prev | next [-] |
| > the door is open for this after AI systems have gathered enough "training data"? Sounds more like the door is open for this once reliability targets are met. I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick. |
|
| ▲ | urikaduri 3 hours ago | parent | prev | next [-] |
| The Gandhi of the corporate world is yet to be found |
| |
| ▲ | scottyah 3 hours ago | parent [-] | | Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics. |
|
|
| ▲ | mgraczyk 3 hours ago | parent | prev | next [-] |
| Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it |
| |
| ▲ | nextaccountic 3 hours ago | parent [-] | | If we are talking about what's best for humanity in the long run, thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not? Snowden revealed that every single call in the Bahamas was being monitored by the NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead? (Note, I myself am not a US citizen) Anyway, regardless of that, the established practice is for the Five Eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2] [1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n... [2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi... | | |
| ▲ | mgraczyk 2 hours ago | parent [-] | | This isn't about privacy rights, it's about war I'm not suggesting that Anthropics models should be used by foreign governments for domestic surveillance I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned | | |
| ▲ | nextaccountic 2 hours ago | parent [-] | | But.. the US doesn't perform mass surveillance on foreign people only when it's at war. It doesn't perform mass surveillance only on adversarial nations it potentially could be at war either. This absolutely is about privacy. > I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned Those foreign governments are spying on Americans and then sharing the results with the US government because the US government is misaligned with the interests of its own people |
|
|
|
|
| ▲ | nhinck2 2 hours ago | parent | prev | next [-] |
| > And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically. |
|
| ▲ | jamesmcq 3 hours ago | parent | prev [-] |
| So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months? Odd. |
| |
| ▲ | serf 3 hours ago | parent | next [-] | | do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordnance? a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment. | |
| ▲ | jamesmcq 3 hours ago | parent [-] | | They’re not saying “AI can replace some menial white collar tasks”, they’re saying AI can replace all white-collar work. Yes, if you fuck up some white collar work, people will die. It’s irresponsible. | | |
| ▲ | NewsaHackO 3 hours ago | parent [-] | | >Yes, if you fuck up some white collar work, people will die. It’s irresponsible. A lot of the work in those sectors isn't what's being targeted for fully autonomous replacement. It likely would be in the future though. |
|
| |
| ▲ | howardYouGood 3 hours ago | parent | prev | next [-] | | [dead] | |
| ▲ | gedy 3 hours ago | parent | prev [-] | | Shh! there's a lot of money riding on this bet, ahem. |
|