| ▲ | mrandish 8 hours ago |
| When @sama announced within hours that OAI was replacing Anthropic with the "same conditions", it was clear that either the DoW or OAI (or both) were fudging. DoW balked at Anthropic's conditions, so OAI's agreement must have made the "conditions" basically unenforceable. And sure enough, my reading of it left the impression the OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself." |
|
| ▲ | _heimdall 7 hours ago | parent | next [-] |
| I'd put money on OpenAI hiding behind the "all lawful use" phrasing to claim high levels of protection. He also claimed that they would build rules into the model the DoD would use, preventing misuse. Aka, he claims OpenAI will quickly solve alignment and build it right in... I wouldn't hold my breath. |
| |
| ▲ | conception 6 hours ago | parent | next [-] | | All lawful use. And then they followed up with “intentionally doing illegal things.” If they happen to accidentally do illegal things, OpenAI is ok with it. | | |
| ▲ | aardvarkr 6 hours ago | parent [-] | | I hate this so much. The NSA’s spying on everyone in 2010 was “legal” and I can only imagine how much worse it is now with AI to follow your digital footprint around everywhere. Too bad we don’t have any more whistleblowers like Snowden | | |
| ▲ | lukan an hour ago | parent | next [-] | | "Too bad we don’t have any more whistleblowers like Snowden" Probably because most don't want to end up in Russia? | |
| ▲ | belter 5 hours ago | parent | prev [-] | | [dead] |
|
| |
| ▲ | thisisit 5 hours ago | parent | prev | next [-] | | The most likely scenario is that if it does something “unlawful” and is found out, they'll claim: “These machines are black boxes and they don’t know what went wrong. They will set up an investigative committee and find out.” | | |
| ▲ | nso 4 hours ago | parent | next [-] | | * spawn 8 investigative agents | |
| ▲ | genxy 5 hours ago | parent | prev | next [-] | | When shit hits the fan they are going to blame AI, but then not even use hand sanitizer. They will 100% be using OAI as a scapegoat, although I'd like to see the OAI goat stay and someone else run into the woods. All Lawful Use is a tautology with fascists because they cannot break laws by definition. | | |
| ▲ | delaminator 2 hours ago | parent [-] | | Yeah, here are some examples of all these fascists doing exactly that: Soviet Union - The show trials of the 1930s were conducted with full legal apparatus: confessions, judges, verdicts. Stalin's purges operated through legally constituted troikas. Entirely "lawful" by Soviet law. East Germany (DDR) - The Stasi's surveillance and harassment programmes were codified in law. When the wall fell, many Stasi officers genuinely argued their conduct was legal under GDR statute: a defence that West German courts largely rejected. Castro's Cuba - Mass executions after the revolution were conducted by legally constituted revolutionary tribunals. Castro explicitly defended this on legality grounds when challenged by foreign press in 1959. Chavez/Maduro's Venezuela - Suppression of opposition media and jailing of political opponents were consistently defended as operating within Venezuelan law, which was progressively rewritten to make it so. Classic self-referential legality. Mao's Cultural Revolution - The revolutionary committees had legal standing. Persecution of intellectuals and landlords proceeded through formal (if kangaroo) legal processes. |
| |
| ▲ | n6hdhf 3 hours ago | parent | prev [-] | | More like they will feed the machine bullshit like "WMDs exist in Fiji. My gut says so. My mom always believes me." The machine will call it out. Then they want override. The machine will log it. Then they want an erase-log button, etc. Institutions and rules didn't fall from the sky. They evolved to dampen the damage caused by such behavior. |
| |
| ▲ | SoftTalker 5 hours ago | parent | prev [-] | | OpenAI: Is that... legal? DoD: I will make it legal. |
|
|
| ▲ | JumpCrisscross 5 hours ago | parent | prev | next [-] |
| For consumer ChatGPT accounts, go to their privacy portal [1] and, first, delete your GPTs, and then, second, delete your account. [1] https://privacy.openai.com/policies?modal=take-control |
| |
| ▲ | Towaway69 5 hours ago | parent | next [-] | | How do I cancel my subscription to the DoW? The bigger picture is that the DoW got what it wanted and it got it by threatening one company while the other did its bidding. | | |
| ▲ | delaminator an hour ago | parent | next [-] | | Here's a simple unsubscribe guide https://usa.gov/renounce-lose-citizenship | |
| ▲ | davidw 4 hours ago | parent | prev [-] | | By voting. | | |
| ▲ | don_esteban 4 hours ago | parent | next [-] | | Did the NSA's spying on everyone change between Democratic and Republican governments? | | |
| ▲ | ori_b 4 hours ago | parent | next [-] | | Did you vote in the primaries for a candidate that might change it? | | |
| ▲ | don_esteban 2 hours ago | parent | next [-] | | Did Democrats offer primaries in the last elections? Did voting for Bernie Sanders in the last two primaries (especially the one when Trump won for the first time) amount to anything? I wonder how long the American public can keep up the self-delusion that the elections are anything but a theater for the naive, to keep the pretense that the public has any say in things that matter. How much has the current administration asked the public about going to war with Iran? | | |
| ▲ | wasabi991011 2 hours ago | parent | next [-] | | > Did voting for Bernie Sanders in the last two primaries (especially the ones when Trump won for the first time) amount to anything? He didn't win the primaries though. It would have amounted to something if he got enough votes. | | |
| ▲ | don_esteban an hour ago | parent [-] | | 1) He did not win the primaries, in significant part because the DNC was heavily against him. The level-playing-field thing. 2) If he had won the primaries, there is still no guarantee that it would have amounted to anything. First, he might not have won the elections (mainstream media and the whole ruling elite were heavily against him). And even if he had won, he might not have been able to do much against the permanent state. I still think the main cause of Trump's wins is the deep disillusionment of Democratic voters with Obama's failure (inability/unwillingness) to effect meaningful change. | | |
| ▲ | wilg an hour ago | parent [-] | | Everything you're saying here is the exact delusional cynicism that got us here. Stop. | | |
| ▲ | don_esteban 40 minutes ago | parent [-] | | Yes, my stance is cynical. Sadly, it is also factually correct (i.e. not delusional). Which of my statements are you contesting? From my point of view, your stance (play fairly, according to the rules set by your stronger opponent) is delusional. Note that the opponent is not 'republicans', but the whole ruling elites. And no, I can't help you, I am not USian, just an outside observer. Sadly, due to its weight, whatever USA does, heavily influences everybody else as well. |
|
|
| |
| ▲ | wilg 2 hours ago | parent | prev [-] | | https://en.wikipedia.org/wiki/2020_Democratic_Party_presiden... https://en.wikipedia.org/wiki/2024_Democratic_Party_presiden... Skill issue. Run your candidate. Convince people to vote for them. > How much has the current administration asked the public about going to war with Iran? THE ELECTIONS are how the public weighs in. | | |
| ▲ | don_esteban 2 hours ago | parent [-] | | Re: Skill issue
Money issue. This is not a level playing field; it is severely tilted.
The referee is bought. But you are saying: you lost fair and square, wait 4 years to have any say in what is going on. Re: THE ELECTIONS are how the public weighs in. When the choice is between Tweedledee and Tweedledum, the public's choice is meaningless. To say nothing of politicians outright shamelessly lying (e.g. Trump campaigning on 'no more wars'). | | |
| ▲ | wilg 2 hours ago | parent [-] | | Money issue is also a skill issue, but I have no doubt in the era of free media someone could figure it out. Sorry I didn't invent the idea that there are federal elections every two years, I'm just telling you that you have to win them. Bonus points: this is also how you can change the election schedule or political system! If you're saying both candidates were bad when one was Trump, and the other was Hillary, Kamala, or Joe, then you don't have very good judgement. I agree Trump lying about not starting a war was bad. Many of us have said for years that he is a terrible liar. Please help us. | | |
| ▲ | don_esteban 44 minutes ago | parent [-] | | I agree that Clinton/Harris/Biden are not equally bad as Trump. Trump is monstrously bad (= force the shit hitting the fan NOW), the democratic alternatives were just 'normally' bad (= continue the same old crap driving the shit closer to the fan, ignoring the looming disaster). |
|
|
|
| |
| ▲ | jrflowers 3 hours ago | parent | prev [-] | | I like that I can’t tell if this is some sort of admonition for not voting centrist enough in a primary that didn’t happen or for not voting left enough in a primary that did not happen. It seems like if you’re going to be so bold as to do a callout you might as well say what for (and why you either picked or specifically skipped a primary that did not happen) |
| |
| ▲ | vkou an hour ago | parent | prev [-] | | No. ... But the government flooding cities with thousands of masked thugs with a license to do whatever they want... has so far been an entirely Republican thing. There are more colours to the world than pure black and pure white. There are also a million shades of grey in between, and most of us have the ability to distinguish between them. |
| |
| ▲ | raincole 2 hours ago | parent | prev [-] | | Voting changes the name of the department. It doesn't change whether the government wants mass surveillance. See PRISM. |
|
| |
| ▲ | teruakohatu 5 hours ago | parent | prev [-] | | Why? If you have so little faith that they will honour the privacy controls, you should delete your non-consumer account too. |
|
|
| ▲ | oxdgd38 7 hours ago | parent | prev | next [-] |
We know how this story will end for Dario. See Oppenheimer, Turing, Lavoisier, Galileo, Socrates, etc. Power does not reside in the hands of people with knowledge or even wealth.
And most technical people have not taken a political philosophy course, or even a philosophy course. The Ring of Gyges story is 4000 years old. |
| |
| ▲ | tmule 6 hours ago | parent | next [-] | | Oppenheimer? Really? Quoting a review of an Oppenheimer biography: “Oppenheimer was clearly an enormously charming man, but also a manipulative man and one who made enemies he need not have made. The really horrible things Oppenheimer did as a young man – placing a poisoned apple on the desk of his advisor at Cambridge, attempting to strangle his best friend – and yes, he really did those things – Monk passes off as the result of temporary insanity, a profound but passing psychological disturbance. (There’s no real attempt by Monk to explain Oppenheimer’s attempt to get Linus Pauling’s wife Ava to run off to Mexico with him, which ended the possibility of collaboration with one of the greatest scientists of the twentieth, or any, century.) Certainly the youthful Oppenheimer did go through a period of serious mental illness; but the desire to get his own way, and feelings of enormous frustration with people who prevented him from getting his own way, seem to have been part of his character throughout his life.” Seems more like Sam Altman, who is known to get his way, than Dario. | | |
| ▲ | toraway 3 hours ago | parent | next [-] | | The source for the poisoned apple story is Oppenheimer himself, and otherwise uncorroborated to be clear. He spent his life clearly racked by feelings of inadequacy, guilt and self-doubt. When combined with a somewhat paradoxical large ego and occasionally fanciful reshaping of his own life story or exaggeration, it's entirely plausible (if not likely) that this was in reality a brief intrusive thought or a partially realized fantasy blown up into a catchy anecdote that better fit his self-image of being unable to control his typically human qualities of anger and envy. If it was Sam Altman, we'd have heard the story from the guy he tried to poison, who instead of filing a police report thought it showed Sam was a real go-getter and offered him his first job on the spot as VP at the company he founded (later forced out by Sam replacing him as CEO, but still considers him a friend with no hard feelings). | |
| ▲ | CamperBob2 4 hours ago | parent | prev | next [-] | | The idea isn't that Oppenheimer was a saint, but that the government he served well and faithfully -- at the expense of his soul, some would argue -- turned on him viciously as soon as he dared to question their agenda. As you suggest, it is easy to imagine Altman in the same hot seat. Never mind his sexual orientation, which the Republican theocrats will eventually use against him as surely as the knives came out for Ernst Röhm. | |
| ▲ | an hour ago | parent | prev [-] | | [deleted] |
| |
| ▲ | adriand 6 hours ago | parent | prev | next [-] | | I think Amodei is widely underestimated. The consensus viewpoint on the deal that OpenAI struck with the Pentagon is that Anthropic got played. I disagree. I'm certain that Amodei and his team gamed this out. In doing so, I think there are at least two conclusions they would have drawn: 1. Some other AI company would cut a deal with the Pentagon. There's no world in which all the labs boycott the Pentagon. So who? Choosing Grok would be bad for the US, which is a bad outcome, but Amodei would have discounted that option, because he knows that despite their moral failures, the Pentagon is not stupid and Grok sucks. That leaves Gemini or OpenAI, and I bet they predicted it would be OpenAI. Choosing OpenAI does not harm the republic - say what you will about Altman, ChatGPT is not toxic and it is capable - but it does have the potential to harm OpenAI, which is my second point: 2. OpenAI may benefit from this in the short term, and Anthropic may likewise be harmed in the short term, but what about the long game? Here, the strategic benefits to Anthropic in both distancing themselves from the Trump administration and letting OpenAI sully themselves with this association are readily apparent. This is true from a talent retention and attraction standpoint and especially true from a marketing standpoint. Claude has long had much less market share than ChatGPT. In that position, there are plenty of strategic reasons to take a moral/ethical stand like this. What I did not expect, and I would guess Amodei did not either, is that Claude would now be #1 in the app store. The benefits from this stance look to be materializing much more quickly than anyone who admired his stance might have hoped. | | |
| ▲ | hedora 6 hours ago | parent | next [-] | | > Choosing Grok would be bad for the US They chose Grok and OpenAI. The story was drowned out by the Anthropic controversy, but an xAI deal was signed the same week. | | |
| ▲ | dolphinscorpion 5 hours ago | parent | next [-] | | Grok is chosen because Musk spent $250+ million to elect Trump and is expected to underwrite the 2026 elections. Also, a lot of Trumps and their friends are invested in SpaceX. So they give them money too, but use OpenAI or Claude. I have a feeling that the military likes Claude more | |
| ▲ | xvector 5 hours ago | parent | prev [-] | | They "chose Grok" for political optics, but they don't seriously intend to use it because it's actually just benchmaxxed garbage - hence why they worked with OpenAI. |
| |
| ▲ | oxdgd38 6 hours ago | parent | prev | next [-] | | The mistake here is thinking they can take on Power without really sitting in any official position of Power. WikiLeaks and Assange got popular too. What happened to them? The State Dept and CIA do exactly what Assange did. They pick and choose who to target with leaks. They get away with it (mostly, even when exposed) because they are officially in power. Assange was not in power.
If you take a moral position do it when you have real power. | | |
| ▲ | generic92034 an hour ago | parent [-] | | > If you take a moral position do it when you have real power. If the condition for getting real power is having no morals, this is hard to accomplish. |
| |
| ▲ | panta 3 hours ago | parent | prev | next [-] | | > Choosing OpenAI does not harm the republic if we consider AIs as "force multipliers" as we do with coding agents, it's easy to see how any AI company can harm the republic if the government they are serving is unethical and amoral. | |
| ▲ | derwiki 6 hours ago | parent | prev | next [-] | | Lyft was briefly number one ahead of Uber, too | |
| ▲ | xvector 5 hours ago | parent | prev | next [-] | | There is also: 3. Talent migration to Anthropic. No serious researcher working towards AGI will want it to be in the hands of OpenAI anymore. They are all asking themselves: "do I trust Sam or Dario more with AGI/ASI?" and are finding the former lacking. It is already telling that Anthropic's models outperform OAI's with half the headcount and a fraction of the funding. | |
| ▲ | techpression 5 hours ago | parent | prev [-] | | They still need a lot of money, and what their VCs think is going to matter more than what Amodei does. Nothing is more profitable than war and government. App Store rankings are meaningless; I have Claude, ChatGPT and Gemini all in the top five, with an electronic mail app at 1 and a postal tracking app (for a very small provider) at 3. | | |
| ▲ | internet101010 4 hours ago | parent [-] | | The value of the hyperscalers' equity in Anthropic alone dwarfs their contracts with the government. Not to mention the revenue from hosting their models that helps justify the insane capex. Anthropic going to $0 would be a huge haircut to all of their balance sheets. | | |
| ▲ | techpression 3 hours ago | parent [-] | | They’ve only invested about 20 billion or so, split between them. Not really something that hurts them in the long or even medium term.
Microsoft has multiple multi-billion-dollar government deals; I think Amazon is the only one that doesn't. Google also has a lot of government contracts, especially outside of cloud. |
|
|
| |
| ▲ | beepbooptheory 7 hours ago | parent | prev [-] | | I do not believe the Ring of Gyges preceded Plato making it up for The Republic... Where are you getting 4000 years? Also maybe not seeing the message or connection here... That myth isn't really about who has power or not, right? It's kind of just a trite little "why you should do good even when no one is watching" thing. It just serves Socrates for his argument with Thrasymachus, and leads us into book 2 where it really gets going with Glaucon and all that. This is from memory so I might be a little off. | | |
| ▲ | oxdgd38 7 hours ago | parent [-] | | I got it from Tamar Gendler's Philosophy and Human Nature course on Open Yale Courses. She says it was a popular folk story passed down orally well before it was written in a book. Plato used it because people grew up hearing the story. The story is asking: what's the source of morality? Who decides where the lines are? And it's not scientists. Science produces the Ring. | | |
| ▲ | beepbooptheory 5 hours ago | parent [-] | | I was wrong, it's in Book II. This is "Socratic irony", its Glaucon speaking, assuming the position of an argument from earlier. Socrates himself of course doesn't believe in this conclusion... we are going to learn later that justice is a form, based on the Good! This is all the doxa of one still in the cave. > According to the tradition, Gyges was a shepherd in the service of the king of Lydia; there was a great storm, and an earthquake made an opening in the earth at the place where he was feeding his flock. Amazed at the sight, he descended into the opening, where, among other marvels, he beheld a hollow brazen horse, having doors, at which he stooping and looking in saw a dead body of stature, as appeared to him, more than human, and having nothing on but a gold ring; this he took from the finger of the dead and reascended. Now the shepherds met together, according to custom, that they might send their monthly report about the flocks to the king; into their assembly he came having the ring on his finger, and as he was sitting among them he chanced to turn the collet of the ring inside his hand, when instantly he became invisible to the rest of the company and they began to speak of him as if he were no longer present. He was astonished at this, and again touching the ring he turned the collet outwards and reappeared; he made several trials of the ring, and always with the same result—when he turned the collet inwards he became invisible, when outwards he reappeared. Whereupon he contrived to be chosen one of the messengers who were sent to the court; whereas soon as he arrived he seduced the queen, and with her help conspired against the king and slew him, and took the kingdom. Suppose now that there were two such magic rings, and the just put on one of them and the unjust the other; no man can be imagined to be of such an iron nature that he would stand fast in justice. 
No man would keep his hands off what was not his own when he could safely take what he liked out of the market, or go into houses and lie with any one at his pleasure, or kill or release from prison whom he would, and in all respects be like a God among men. Then the actions of the just would be as the actions of the unjust; they would both come at last to the same point. And this we may truly affirm to be a great proof that a man is just, not willingly or because he thinks that justice is any good to him individually, but of necessity, for wherever any one thinks that he can safely be unjust, there he is unjust. https://gutenberg.org/cache/epub/1497/pg1497.txt |
|
|
|
|
| ▲ | hn_throwaway_99 3 hours ago | parent | prev | next [-] |
Agree with this completely. But besides Sam Altman, this whole episode has made me totally and completely lose all respect for Paul Graham. I used to really idolize pg, and I really used to like his essays, but over the years I've found his essays increasingly displayed a disturbing lack of introspection, like they'd always seem to say that starting a startup is the best thing anyone can do, and if you're not good at startups then you kind of suck. But his continued support of Altman in this instance (see https://x.com/paulg/status/2027908286146875591, and the comment in that thread where he replies "yes") is just so extra disappointing and baffling. First, his big commendation for Altman is that he's doing an AMA? Give me an f'ing break. When someone is a great spin doctor I'm not going to commend them for doing more spinning. It's like he has total blinders on and is unwilling to see how sama's actions in this instance are so disgusting and duplicitous. Maybe subconsciously he knows he's responsible for really launching sama into the public consciousness, so he's now just incapable of seeing the undeniably shitty things sama has done. Oh well, I guess it's just another tech leader from the late 90s/early 00s who has shown me he's kind of a shitty person like a lot of us. |
|
| ▲ | sakesun 8 hours ago | parent | prev | next [-] |
| > it was clear that either the DoW or OAI (or both) were fudging. This is my first thought as well. It's too obvious. He should have consulted ChatGPT before the announcement. |
| |
| ▲ | shigawire 5 hours ago | parent [-] | | More likely assumed (perhaps rightfully) that there would be no consequences anyway. |
|
|
| ▲ | LarsDu88 5 hours ago | parent | prev | next [-] |
Greg Brockman donated 25 million dollars, and the DoW gives OpenAI a 200 million dollar contract. Just good ol' fashioned grifting mixed with a bit of government corruption. This country has been boiling the frog of graft, grifting, and corruption for too long. |
|
| ▲ | cobbzilla 4 hours ago | parent | prev | next [-] |
| per other Snowden comments, “all lawful use” means whatever we want it to mean. Secret FISA court decisions will say the use is lawful, but you’ll never get to read or challenge those decisions. |
|
| ▲ | fmajid 6 hours ago | parent | prev | next [-] |
| Or, as is likely, OpenAI models have no guardrails, Anthropic's did and the DoD was bumping into them. |
| |
| ▲ | galangalalgol 5 hours ago | parent [-] | | Does anyone else notice Claude is just plain better at reasoning? It may not just be post-training guardrails. It would not surprise me if it was something Anthropic couldn't simply disable, either from reinforcement or even training corpus curation. Of all the models, Claude is the only one that makes me wonder if they have figured out something beyond stochastic language generation and aren't telling anyone | | |
| ▲ | solenoid0937 5 hours ago | parent [-] | | I have noticed this too, despite the close benchmark results Claude just works better. It knows when to push back, it has an "agency"... there is something there that I don't see with Gemini or OpenAI's best paid models. |
|
|
|
| ▲ | cheema33 8 hours ago | parent | prev | next [-] |
> OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself." I believe this understanding is correct. The issue many people have these days with the Dept. of War, and most of the Trump admin, is that they have little respect for laws. They only follow the ones they like and openly ignore the ones that are inconvenient. The Dept of "War" should have zero problems agreeing to the two conditions Anthropic outlined, if they were honest brokers. But I think most of us know that they are not. Calling them dishonest brokers seems very charitable. |
| |
| ▲ | aardvarkr 6 hours ago | parent | next [-] | | I don’t care who is in the White House. Snowden revealed the crimes of the NSA in 2013 when Obama was president. They’re all going to want to use AI for mass surveillance | | |
| ▲ | Tanjreeve 2 hours ago | parent [-] | | AI doesn't add anything to the ability to do mass surveillance. That genie was already out of the bottle with clouds and big data systems. At best AI might take on some of the gruntwork of drawing conclusions from profiles, but it's doing its usual thing of being a powerful interface built on top of other systems. |
| |
| ▲ | reactordev 8 hours ago | parent | prev | next [-] | | I haven’t seen them follow a law yet | |
| ▲ | lmeyerov 7 hours ago | parent | prev | next [-] | | I find it confusing in most directions. Ex: For the above statement, if they're truly dishonest brokers and openly ignore the rules that are inconvenient, they would have zero problems agreeing to Anthropic's terms and then violating them. So what you say may be quite true, but there would still need to be more to the story for it to make sense. Ex: DoW officials are stating that they were shocked that their vendor checked in on whether signed contractual safety terms were violated: They require a vendor who won't do such a check. But that opens up other confusing oversight questions, eg, instead of a backchannel check, would they have preferred straight to the IG? Or the IG more aggressively checking these things unasked so vendors don't? It's hard to imagine such an important and publicly visible negotiation being driven by internal regulatory politicking. I wonder if there's a straighter line for all these things. Irrespective of whether folks like or dislike the administration, they love hardball negotiations and to make money. So as with most things in business and government, follow the money... | | |
| ▲ | 3eb7988a1663 6 hours ago | parent [-] | | I have no idea what exactly Anthropic was offering the DoD, but if there were an LLM product, it's possible that the existing guardrails prevented the model from executing on the DoD's vision. "Find all of the terrorists in this photo", "Which targets should I bomb first?" Even if the DoD wanted to ignore the legal terms, the model itself would not cooperate. The DoD required a specially trained product without limitations. |
| |
| ▲ | ExoticPearTree 6 hours ago | parent | prev [-] | | [flagged] | | |
| ▲ | sfink 5 hours ago | parent | next [-] | | There's a reason it's unpopular. If your company makes an herbicide that happens to be very good at killing off anyone who drinks it at a high concentration in their water supply, you're saying that there should be no way for your company to resist being used for mass murder (including unavoidable collateral damage)? Also, the core mission of the military is not "killing its adversaries through any means necessary". It is to defend state interests. Some people have a belief that mass killing is the best mechanism for accomplishing that. I do not agree with, nor do I want to associate with, those people. They are morally and objectively wrong. Yes, sometimes killing people is the most effective -- or more likely, the quickest -- way. In practice, it doesn't work very well. The threat of violence is much more powerful than actually committing violence. If you have to resort to the latter, you've usually screwed up and lost the chance to achieve the optimal outcome. It is true that having no restrictions whatsoever on your ability to commit violence is going to be more intimidating, but it also means that you have to maintain that threat constantly for everyone, because nobody has any other reason to give you what you want. The actual military is not evil. Your conception of it is. | | |
| ▲ | palmotea 5 hours ago | parent | next [-] | | >> Unpopular opinion around here, but no company should have the ability to stop the military from its core mission: killing its adversaries through any means necessary. > The actual military is not evil. Your conception of it is. You're right, but there's a real question here: should a company have the ability to control or veto the decisions of the democratically-elected government? To give a different hypothetical example: should Microsoft be allowed to put terms in its Windows contracts with the government, stipulating that Windows cannot be used to create or enforce certain tax policy or regulations that Microsoft disagrees with? Windows is all over, and I'm sure pretty much every government process touches Windows at some point, so such a term would have a lot of power. | | |
| ▲ | sfink 4 hours ago | parent | next [-] | | > You're right, but there's a a real question here: should a company have the ability to control or veto the decisions of the democratically-elected government? I don't think "control or veto" is fair. Anthropic is not trying to prevent the US government from creating full autonomous killbots based on inadequate technology. They are only using contract law to prevent their own stuff from being used in that way. But that aside, my opinion is that to a first order approximation, yes a company should very much be able to have say in its contract negotiations with any party including the government. It's very similar to the draft. I don't believe a draft is ethical until the situation is extreme, and there ought to be tight controls on what it takes to declare the situation to be that extreme. At any other time, nobody should be forced to join the military and shoot people, and corporations (that are made of people) should not be forced to have their product used for shooting people. A corporation is a legal fiction to describe a group of people. Some restrictions can be placed on corporations in exchange for the benefits that come from that legal fiction, but nothing that overrides the rights of its constituent people. Governments are made of people too. Again, a subset of people are given some powers in order to better achieve the will of the people, but with tight controls on those powers to keep the divergence to a minimum. (Of course, people will always find the cracks and loopholes and break out of their constraints, but I'm talking about design not real-world implementation here.) So to look at your hypothetical, first I'd say it's not very different from the question of whether an individual person should be forced to personally enforce tax policy. Normally, I'd say no. 
There are many situations where the government needs more say and authority in such things, but that must only be achieved via representatives of the people passing laws to allow such authority. Other than that, yes: I believe a company should be able to negotiate whatever contract terms it wants. In a democracy, we are not subjects of a controlling government; the government is an extension of us. In practical terms, if Microsoft were to insist on that contract stipulation, the government would not agree to the contract and would award its business to someone else. If the government were especially out of control and/or unethical, it might punish Microsoft with regulations or declarations of supply chain risk or whatever, but that is clearly overstepping its bounds and ought to be considered illegal if it isn't already. The usual fallback would be that the people would throw the people perpetrating that out on their asses. That's the "democratically-elected part". Obviously, Microsoft would be stupid to insist on such a thing in their contract, and its employees would probably lose all confidence in the corporate leadership. Most likely, they'd leave and start Muckrosaft next door that rapidly develops a similar product and sells it to the government under a reasonable contract. Basically, I'm always going to start from people first, and use organizations and laws only in order to achieve the will of the people. The fact that the people are stupid does make that harder, but the whole point of democracy is that we'll work out the right balance over time. | |
| ▲ | 4 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | ExoticPearTree 4 hours ago | parent | prev [-] | | My conception is that the world would be a much simpler place if war were total. No one would start one unless it was 200% sure it could win. And we would all go through military training just in case, you know, a neighbor drank too much last night and thinks he can win against you. > The threat of violence is much more powerful than actually committing violence. While I agree with this statement, the only way the threat works is if from time to time you apply violence to reinforce your capability and willingness to actually do it. And the US is really good at actually being violent, so others don't even think about doing something against it, at least the majority of countries anyway. | | |
| ▲ | don_esteban 4 hours ago | parent [-] | | Re: My conception is that the world would be a much simpler place if war was total. No one would start it unless it would be 200% it could win it Now apply the same logic to the current Iran war. | | |
| ▲ | ExoticPearTree 2 hours ago | parent [-] | | I do not see Iran winning this. The current government is also hated by the people, who would very much like to see all of them dead. Al Jazeera has some very good insights into this, and the gist of it is: the Iranian regime is in a fight for its life with nothing to lose. If they are degraded enough, a revolution will start in Iran and they will be killed by the people. Or by US/IL bombs, whichever comes first. There is no way they get out of this alive. They are trying to delay the inevitable. | | |
| ▲ | don_esteban 2 hours ago | parent | next [-] | | Regarding Iran's future: You are describing Libya scenario, not a 'lived prosperously ever after'. There is no credible opposition in Iran to take the mantle. | | |
| ▲ | ExoticPearTree 6 minutes ago | parent [-] | | No. Almost all of Iran's population belongs to the same ethnic group, which was not true in Libya: there, all the tribes started fighting each other. It does not have an established opposition because the current regime has a habit of killing anyone it doesn't like or anyone who goes against the official line. Now there is a chance for an opposition to form. |
| |
| ▲ | don_esteban 2 hours ago | parent | prev | next [-] | | OK, slowly: The wars are already total for the weaker sides. See Ukraine/Iran. That did not stop the stronger side from attacking. You are advocating for no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Really, is that what you want? But yes, you are right, the world would be much simpler in that case: there would be no humans left. OK, maybe some hunter-gatherers. | | |
| ▲ | thaumasiotes 2 hours ago | parent [-] | | > You are advocating for no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Really, that's what you want? Taken literally, it means genocide of the losers is an option the winning side has. It always has been. Note that Genghis Khan's explicit plan when he conquered China was to wipe out the Chinese to make room for Mongols. He wasn't stopped from doing that; there was no constraint to block him. But he was persuaded not to. |
| |
| ▲ | Tanjreeve 2 hours ago | parent | prev [-] | | This is the same mistake media and policy pundits made in Iraq and Syria. Dictatorial regimes collapse pretty quickly without a base of support significant enough to stop a revolution from happening. They might not have a majority of people supporting them, but it isn't a democracy. Dictatorial regimes will always have one or more of the military, business, or sub-groups of citizens in their pockets as clients. Whenever we say "the regime is hated by its people, it will collapse," we should ask "then why didn't it collapse already?" In Iran, metropolitan areas are where you see opposition. That's also where people have cameras and where media orgs tend to be. We get a warped depiction of opposition in Iran even without our own media's baggage. Meanwhile, the power base of Iran is everywhere but the metropolitan cities, and there are a lot of clients who benefit from the regime. I think this might be worse than the sectarian violence that came out of the Hussein regime's collapse, because the Sunni sect his base was built around was still a minority. This time it's the majority, and the people being fought against are the Americans, the Israelis, and the Arabs, so their backs are against the wall; this is already a total war from their side. |
|
|
|
| |
| ▲ | saghm 3 hours ago | parent | prev | next [-] | | With the way you've phrased it, the government could nuke the entire world; all of the adversaries would be dead, along with literally everyone else. I don't really see why it's an issue if a company doesn't want to sell them the tools to do that. | | | |
| ▲ | xrd 6 hours ago | parent | prev | next [-] | | If I start a small business that sells apples, and the US government comes to me and says "we want to buy your apples and fire them at high speed to" (these are now your words) "kill adversaries through any means necessary," am I stopping the military if I say no? I feel like it is reasonable that I can say "no, I don't want to sell you my apples." I cannot for the life of me figure out why that means I am stopping the military from killing people. The US military will definitely still be able to kill people for centuries. I'm just saying I don't want to participate in it. | | |
| ▲ | throwaway173738 5 hours ago | parent | next [-] | | More to the point, if everyone stopped selling anything to the military they would still be able to kill people with their bare hands. People are arguably very good at killing people and it takes civilization to train us not to kill each other. | |
| ▲ | ExoticPearTree 5 hours ago | parent | prev [-] | | In the context of the larger discussion, if you already sold apples to the military, you cannot go to them and say you don't like how they're using the apples you sold them. | | |
| ▲ | sfink 4 hours ago | parent [-] | | In the context of the larger discussion, Anthropic thought of that ahead of time and put the restrictions into the contract that the government agreed to. So "already sold" is a non-sequitur; that's not the situation under discussion. |
|
| |
| ▲ | Cantinflas 4 hours ago | parent | prev | next [-] | | That's not their mission, in any country, ever. | |
| ▲ | sixothree 4 hours ago | parent | prev | next [-] | | The problem here is that this department claims its adversaries are Americans. Do you think Anthropic should aid in the killing of Americans? | | |
| ▲ | ExoticPearTree 2 hours ago | parent [-] | | I don’t believe for a second the Pentagon sees Americans as adversaries. | | |
| ▲ | uxcolumbo 3 minutes ago | parent [-] | | Trump sees many Americans as adversaries (i.e. the 'radical left' like Alex Pretti an ER nurse and Renee Nicole Good - a mother). In his first term he asked whether protestors can be shot in the legs. So in short it doesn't matter what the Pentagon thinks as Trump is the commander in chief and as far as I know the Pentagon has to follow his orders. |
|
| |
| ▲ | throwaway290 6 hours ago | parent | prev | next [-] | | Any company is free to choose its business partners and set terms for them. "Don't like our terms, don't partner with us." If the government can force any private company to work specially for the government, then the US is no better than the PRC. | | |
| ▲ | SoftTalker 5 hours ago | parent [-] | | You might want to read about the War Production Board during World War II. Established by a presidential executive order no less. | | |
| ▲ | throwaway290 5 hours ago | parent [-] | | Wasn't that for defense during an actual war started by another country? Legit war time measures can be a thing (that's why it's fucked if president can just start a war and then use that as excuse for any war time measures they like) | | |
| ▲ | ExoticPearTree 4 hours ago | parent [-] | | "Legit wartime measures" is not a thing. If Congress declares war on Cuba or Venezuela, for example, people who do not support it will not see the measures as "legit." The US has a lot of precedent of bombing/invading other countries at the whim of presidents without actually calling it a war, going back decades. And for better or worse, it is actually good that it is like this. Otherwise, if Congress declared war on Iran or China or whatever, the whole country would be put on a war footing, companies would be directed to build whatever the Pentagon says it needs, drafts would be enforced, and so on. And it would be pretty ugly. | | |
| ▲ | ithkuil 3 hours ago | parent | next [-] | | If Congress had declared an actual war, and if it had invoked wartime laws to force a private company to comply with the war effort, we wouldn't be having this conversation. What happened was different: a private company decided to enforce some terms, as it can do during peacetime, and it was bullied in a way that is disgraceful precisely because it didn't happen during wartime, nor was it done using the existing laws around that. What is the purpose of having laws in the first place if we accept that the government can rule by intimidation? | |
| ▲ | throwaway290 3 hours ago | parent | prev [-] | | If you didn't notice, we are talking about WWII. The USA was not the aggressor. Fat chance of Congress declaring a war of aggression on a peaceful country. |
|
|
|
| |
| ▲ | hedora 6 hours ago | parent | prev [-] | | Yes, Musk is guilty of treason for exactly that reason. He directly sabotaged a major US military operation in Ukraine. However, the military is bound by US and international law. It's clear they're not going to obey either of those with respect to this contract. On top of that, Anthropic has correctly pointed out that the use cases Trump was pushing for are well beyond the current capabilities of any of Anthropic models. Misusing their stuff in the way Trump has been (in violation of the contract) is a war crime, because it has already made major mistakes, targeted civilians, etc. |
|
|
|
| ▲ | jaredklewis 4 hours ago | parent | prev | next [-] |
| > DoW balked at Anthropic's conditions so OAI's agreement must have made the "conditions" basically unenforceable. I think it’s also possible DoW didn’t care about the conditions but just wanted some pretext to punish Anthropic because Dario isn’t a Trump boot licker like the rest of the SV CEOs. |
|
| ▲ | spwa4 2 hours ago | parent | prev [-] |
Except if there's one defining property of the last 4-5 administrations, it is that they definitely and constantly violate the rules they set for themselves. With every new administration it gets worse and worse. And while this administration is brazen about it, it's not really a drastic change anywhere. In fact most EU laws (GDPR, AI regulation, Chat Control) directly, up front, declare that the governments themselves won't respect them. They very directly have one set of rules for states, government employees, ... and ANOTHER set of rules for everyone else. And they're incredibly brazen. For private individuals and companies it goes very far: it's essentially impossible to even know what does and does not violate the GDPR, and you can't ask the courts; that's not allowed. You also cannot use the courts to compel the government to do anything under these laws. For governments, when it comes to what's allowed, it goes incredibly far. Governments can declare any action legal under the GDPR, before and after the fact, without parliament involvement. It does not matter if that action was done by the government itself, or if it's an action by a private company (so the government can use subcontractors for any violation of the GDPR). This means that, for THE example given for GDPR protection, medical information, the law does the exact opposite of what it appears to do: since medical insurance in the EU is either state-owned or has exceptions, it makes all your medical information available to medical insurers. And the police (e.g. to find you). And the tax office. And courts. And medical institutions themselves (to deny transplants to smokers). And ... And while doctors (and priests) used to be huge no-no's when it came to information gathering, that's no longer the case. If a doctor uses the state-required medical file, your medical information flows straight into a state database, immediately searchable by everyone the GDPR supposedly protects you against. |