| ▲ | unyttigfjelltol 3 hours ago |
| Techno futurist: 1. Builds tool extremely capable of mass surveillance and running autonomous warfighting capabilities. 2. Expresses shock — shock — when the Department of War insists on using the tool for mass surveillance and autonomous warfighting systems. |
|
| ▲ | Thrymr 2 hours ago | parent | next [-] |
| I don't doubt that Claude is capable of mass surveillance, but surely it is not too much of a stretch to say it may not be suitable for automated killbots? |
| |
| ▲ | godelski 2 hours ago | parent | next [-] | | "We kill people based on metadata."
— General Hayden, former Director of the NSA and former Director of the CIA
This goes far beyond metadata... [source] https://www.youtube.com/watch?v=tL8_caB35Pg | |
| ▲ | ozlikethewizard 2 hours ago | parent | prev | next [-] | | I assume the techs at the Pentagon know that, and it'd be used more for intelligence. (Equally worrying, because if there's one thing GPTs aren't, it's intelligent.) | |
| ▲ | groby_b 2 hours ago | parent | prev [-] | | IDK, depends on how much you care about outcomes. I don't think Drunk Pete does, very much. |
|
|
| ▲ | diydsp 2 hours ago | parent | prev | next [-] |
| 1. The article points out Claude has resisted being trained for that. AI in general could, but Claude cannot. |
|
| ▲ | spidersenses 2 hours ago | parent | prev | next [-] |
| Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus |
|
| ▲ | 2 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | EA-3167 2 hours ago | parent | prev [-] |
| Step 1.5 is also the one being ignored by 95% of comments here: the leverage the Pentagon is using is the lucrative contract Anthropic signed with them. The only threat here is Anthropic sucking up less money from the DoD. |
| |
| ▲ | unsnap_biceps 2 hours ago | parent | next [-] | | The article lists three things, two of which are concerning beyond just losing some money. Granted, I have no idea how realistic the latter two are. "These consequences are generally understood to be some mix of:
- canceling the contract
- using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree
- the nuclear option, designating Anthropic a "supply chain risk". This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock Anthropic out of large parts of the corporate world and be potentially fatal to their business. The "supply chain risk" designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented."
| |
| ▲ | Balinares 2 hours ago | parent | prev | next [-] | | It's been amazing watching them cosplay ethics while twisting themselves into knots to justify selling their service to Satan. Who could have predicted that Satan would turn around and screw them? Everyone, ever. Maybe they should have asked a person instead of Claude. | | |
| ▲ | EA-3167 an hour ago | parent [-] | | Genuinely shocking. "We were totally fine with this being used to target people for surveillance and killing, but now you've crossed our arbitrary ethical fig-leaf so here's a big stink." I won't be surprised if they reach an eventual compromise that represents what the Pentagon wanted all along, while Anthropic can continue their chicken little act... all while building the very thing they claim to fear. |
| |
| ▲ | hoopleheaded 2 hours ago | parent | prev [-] | | Exactly - step 2 should be: sign a $200MM contract with a party obviously and extremely interested in mass surveillance and autonomous warfighting capabilities. Then comes the shock. |
|