serguzest 2 hours ago
AI target-selection systems have become a loophole that severs the link between the decision-maker, who should bear responsibility within the military bureaucracy, and the action actually taken. Israel has been the implementer of this model in Gaza (Palantir was most likely part of the system as well). Recall what former Israeli Chief of Staff Herzi Halevi reportedly conveyed about a meeting with Netanyahu: the IDF said they had struck 1,400 targets, yet Netanyahu reportedly slammed the table, angrily asked why it wasn't 5,000, and said "bomb everywhere and destroy the houses." For the military bureaucracy, the fact that an AI can speculate and generate candidate targets (entirely possible with LLM systems) becomes a convenient mechanism that, at least on paper, lets them distance themselves from responsibility.

Now look at the statements from Anthropic and Hegseth:

https://www.anthropic.com/news/where-stand-department-war

https://x.com/SecWar/status/2027507717469049070

Anthropic's own statement makes clear that the two have been closely partnered. Hegseth's tweet reads: "Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service." In other words, Anthropic is still being actively used by the Department of War.

My view is that Anthropic and its investors eventually realized the American war machine would use their technology recklessly, and that this would inevitably become a massive PR disaster or, in an ideal world, even carry legal consequences. That realization likely pushed them toward the "humanitarian" position they now present. We have already seen incidents where roughly 180 children were killed due to faulty targeting, assuming and hoping it was not intentional.