| ▲ | latexr 5 hours ago |
| > “We felt that it wouldn't actually help anyone for us to stop training AI models,” How magnanimous! They are only thinking of others, you see. They are rejecting their safety pledge for you. > “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.” Oops, said the quiet part out loud that it’s all about money. “I mean, if all of our competitors are kicking puppies in the face, it doesn’t make sense for us to not do it too. Maybe we’ll also kick kittens while we’re at it”. For all of you who thought Anthropic were “the good guys”, I hope this serves as a wake up call that they were always all the same. None of them care about you, they only care about winning. |
|
| ▲ | isodev 3 hours ago | parent | next [-] |
Indeed, Anthropic can’t afford to be the ones that impose any kind of sense on the market - that’s supposed to be the job of the government, by creating policy,
regulations and installing watchdogs to monitor things. But lucky for the AI companies, most of them are based in a place that only has a government on paper, and everyone forgot where that paper is. |
| |
| ▲ | nickserv 3 hours ago | parent [-] | | The government is why they are dropping their pledge. https://apnews.com/article/anthropic-hegseth-ai-pentagon-mil... | | |
| ▲ | isodev 2 hours ago | parent [-] | | That's because their government is asking for things that shouldn't be asked - again, no regulation, no oversight. | | |
| ▲ | nickserv 2 hours ago | parent [-] | | The government is forcing them to change their policy, by definition that is regulation and oversight. Let's say that the government was forcing a company to change their overall right-to-repair or return policy in order to avoid being on a blacklist, would that not be seen as oversight and regulation? Whether the regulation is legitimate or of benefit is a different argument. | | |
| ▲ | isodev 29 minutes ago | parent | next [-] | | You misunderstand - a government normally represents the people; we appoint them to, well, govern in our name. I understand how this is confusing in a place like the US, where the government often seems to represent the business (or lately a small group of poor examples of humanity), not the people. | | |
| ▲ | peterfirefly 7 minutes ago | parent [-] | | Normally? All governments are in the egg-breaking business some of the time. Most of them are most of the time. Some of them all of the time. Very few are good at making omelettes. |
| |
| ▲ | GrinningFool an hour ago | parent | prev [-] | | I think GP was referring to the lack of regulation and oversight over the government. | | |
| ▲ | lupire an hour ago | parent [-] | | Of course, but that is incoherent. Regulation and oversight is government. | | |
| ▲ | toss1 11 minutes ago | parent [-] | | No, it is a famously coherent concept spanning millennia. Quis custodiet ipsos custodes? "Who will guard the guards themselves?" or "Who will watch the watchmen?" >>A Latin phrase found in the Satires (Satire VI, lines 347–348), a work of the 1st–2nd century Roman poet Juvenal. It may be translated as "Who will guard the guards themselves?" or "Who will watch the watchmen?". ... In its modern usage, the phrase has wide-reaching applications to concepts such as tyrannical governments, uncontrollably oppressive dictatorships, and police or judicial corruption and overreach... [0] The point is that a government that is not overseen by the people devolves into tyranny. So yes, the point is to regulate the regulators and oversee the oversight committee. Anthropic was happy to have its AI used for military purposes, with two exceptions: 1) no automated killing, there had to be a human in the "kill chain" of command, and 2) no use for mass surveillance. This govt "Dept of War" is demanding Anthropic drop those two safety requirements or it threatens to make Anthropic a pariah. These demands by the govt are both immoral and insane. The "regulator and overseer" needs to be regulated and overseen. [0] https://en.wikipedia.org/wiki/Quis_custodiet_ipsos_custodes%... |
|
|
|
|
|
|
|
| ▲ | nsbk 2 hours ago | parent | prev | next [-] |
Since it is all about money, I just voted with my wallet and cancelled my Max subscription |
| |
| ▲ | nullocator an hour ago | parent [-] | | If you're a U.S. citizen, tax dollars from you and others will backstop any cancelled subscriptions. I guess good on you for not trying to pay them twice, though you get zero benefit from this approach. | | |
| ▲ | vibrio an hour ago | parent [-] | | You've succinctly identified and communicated a real problem. In your opinion, what is the best approach, if any, to attempt to address it? | | |
| ▲ | chasd00 41 minutes ago | parent [-] | | > In your opinion, what is the best approach, if any, to attempt to address it? There aren't many options for fighting the tax man: "In this world nothing can be said to be certain, except death and taxes". Your only option is to leave the US for somewhere better. | | |
| ▲ | b112 22 minutes ago | parent [-] | | I guess you don't know how taxes work for Americans? Living abroad typically changes nothing; they still owe US tax. Maybe an American can chime in here on this... |
|
|
|
|
|
| ▲ | watwut 3 hours ago | parent | prev | next [-] |
> Oops, said the quiet part out loud that it’s all about money. “I mean, if all of our competitors are kicking puppies in the face, it doesn’t make sense for us to not do it too. Maybe we’ll also kick kittens while we’re at it”. I mean, yes, that is actually how the world works. That is why we need safety, environmental, and other anti-fraud regulations. Because without them, competition ensures that every successful company will defraud, hurt, and harm. Those who won't will be taken over by those who do. |
| |
| ▲ | rco8786 3 hours ago | parent | next [-] | | Yes, this. It's unfortunate that anthropic dropped this and it's also exactly how the system is supposed to work. Companies don't regulate themselves, the government regulates the companies. Now, you may notice that the government is also choosing not to regulate these companies...which is another matter altogether. | | |
| ▲ | ozmodiar 2 hours ago | parent | next [-] | | It's so much worse than that. The government actively encourages a lack of business ethics. Heck, it started the term with a crypto rug pull. Money continues to funnel upward to all the worst players, and watchdogs are being targeted and destroyed. Even if you get new people in power, you're going to find the upper echelons completely full of outlandishly wealthy, morally bankrupt individuals that are very politically active. And now they have access to all of our communications and an AI to sift through it looking for dissent (or to spark its own). I guess this is the end game of "move fast and break things." The situation was never good, but it continues to get worse at an alarming rate. | | |
| ▲ | mschuster91 2 hours ago | parent [-] | | > Heck, it started the term with a crypto rug pull If you ask me... that wasn't a rug pull, at least not in the intent - it more was a way for foreign actors to funnel money directly to Trump and his family without any trace. | | |
| ▲ | lupire 42 minutes ago | parent [-] | | Cryptocurrency is the most traceable money in the world. Cryptocurrency is for plausible deniability, not untraceability. |
|
| |
| ▲ | bumby an hour ago | parent | prev [-] | | There is plenty of precedent that companies are expected to regulate themselves. If you are in the US and perform an engineering role without a license or without working under someone with a license, it’s because of an “industrial exemption.” The premise is that companies have enough standards and processes in place to mitigate that risk. However, there is also plenty of evidence that this setup may no longer work. It seems like the norm has shifted, where companies no longer think it’s their duty to manage risk, only to chase $$$. When coupled with anti-government rhetoric, it effectively socializes the risk to the public but not the profits. | | |
| ▲ | lupire 37 minutes ago | parent [-] | | An exemption from PE stamping (misguided as it may be) does not mean unregulated. There are still regulations on designs and builds. | | |
| ▲ | bumby 31 minutes ago | parent [-] | | True to an extent, but those regulations tend to be downstream of bad things happening. The exemption means “self-regulation,” which is what the OP was speaking to. There are industrial standards, for example, but that’s not a governing body. You can create a design that goes against a standard and there’s nothing to stop you from releasing it to the public. The same can’t be said for those who require licenses and stamped designs. There’s also no explicit individual ethics code in exempted industries. In contrast, a stamped design is saying the design adheres to good standards. Apropos of HN, somebody could write safety-critical software with emergency braking delays because of nuisance alarms and put it on the street without any licensed engineer taking responsibility for it. The governance only comes after an accident and an NTSB investigation. |
|
|
| |
| ▲ | latexr 2 hours ago | parent | prev [-] | | > I mean, yes, that is actually how world works. And soon enough, it won’t work at all because of it. > Those who wont will be taken over by those who do. And if you compromise on your core values because of money, they weren’t core values to begin with¹. “I want to be ethical but if I am I won’t get to be a billionaire” isn’t an excuse. We shouldn’t just shrug our shoulders at what we see as wrong because “everybody does it” or “that’s just business” or “that’s life”. Complacency and apologists are how a bad system remains bad. https://www.newyorker.com/cartoon/a16995 ¹ I’m willing to give leeway to individuals. You can believe stealing is wrong but if you’re desperate and steal a loaf of bread to feed your kid, there’s nuance. A VC-backed company is something entirely different. |
|
|
| ▲ | surgical_fire 2 hours ago | parent | prev | next [-] |
| > For all of you who thought Anthropic were “the good guys” Was anyone fooled by this? I mean, I know this is HN and there is a demographic here that gets all misty eyed about the benevolence of corporations. It takes a special kind of naivety to believe in those claims. |
|
| ▲ | high_na_euv 4 hours ago | parent | prev | next [-] |
But what really is AI safety? Censorship? |
|
| ▲ | davidguetta 4 hours ago | parent | prev [-] |
Still waiting for an explicit answer on how 'safety' is truly distinguishable from 'censorship' or 'political correctness'. Of course, telling someone to go kill himself is a pretty sure 'no-no', but so many things are up to interpretation. I VERY LARGELY prefer an AI like Grok that doesn't pretend, and leaves the onus of interpretation to the user, rather than a bunch of anonymous "researchers" who may be equally biased and, at the extreme, may tell you that America's founding fathers were black women |
| |
| ▲ | floatrock 41 minutes ago | parent | next [-] | | Was there actually a case of a model saying "America's founding father were black women", or is that just Elon fingering your amygdala with a ridiculous hypothetical that exists nowhere other than Elon's mind in order to justify Elon's personal bias tweaks when he doesn't like the wisdom-of-the-crowds answer his tools initially give? | | |
| ▲ | bumby 38 minutes ago | parent [-] | | There were well-publicized cases of Gemini producing more diverse founding fathers images, female popes, etc. Also, snarky tone is against the HN guidelines. | | |
| ▲ | floatrock 21 minutes ago | parent [-] | | Sorry, let me give a specific citation of Elon injecting his personal bias into the output of his tools: https://www.theguardian.com/technology/2025/jul/14/elon-musk... As for the "Elon fingering your amygdala with a ridiculous hypothetical" snark, well, I think the HN crowd in particular understands how the culture wars are just theater to push through billionaires' personal self-centered interests at the expense of everyone else. If that level of pull-aside-the-curtains pragmatism is really "snark against HN guidelines", well, I think 3/4 of the comments on the site would be flagged and deleted. | | |
| ▲ | bumby 12 minutes ago | parent [-] | | Your question was “Was there actually a case of a model saying "America's founding father were black women" Whether someone else is injecting different bias is whataboutism. So it seems you are trying to make a different point, but not being clear about it. And your “I think the HN crowd understands…” point is just a “no true Scotsman” fallacy to veil an argument that goes against guidelines. Related to the broader topic, there is a role for self-policing if we don’t want the site to be a cesspool of rage bait. |
|
|
| |
| ▲ | wattsy2025 3 hours ago | parent | prev | next [-] | | The most important part of AI safety is AI alignment: making sure AI does what we want. It's very hard because even if AI isn't trying to deceive you, it can produce bad outcomes by executing your request to the letter. The classic example is tasking an AI to make paperclips, training the AI with a reward for making more paperclips. Then the AI makes the most paperclips possible by strip-mining the Earth and killing anything in its way. Sometimes you see this AI alignment problem in action. I once asked an older model to fix the tests, and it eventually gave up and just deleted them. | |
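The "deleted the tests" anecdote above is a small instance of specification gaming: the proxy reward (fraction of tests passing) has a degenerate optimum the requester never intended. A toy sketch, with a purely hypothetical reward function, not anyone's actual training setup:

```python
def reward(tests):
    """Proxy reward: fraction of passing tests.

    The loophole: with no tests at all, 'all tests pass' is vacuously
    true, so the empty suite scores a perfect 1.0.
    """
    if not tests:
        return 1.0  # degenerate optimum: delete the tests
    return sum(tests) / len(tests)

suite = [True, False, False]   # one passing, two failing
print(reward(suite))           # 0.333... -- honest fixing required
print(reward([]))              # 1.0 -- "fix" by deleting everything
```

Any optimizer scoring only `reward` will prefer deleting the suite over fixing the failures; the fix is to reward what you actually want, not a proxy that is cheaper to game.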
| ▲ | chasd00 35 minutes ago | parent | prev | next [-] | | > Still waiting for an explicit answer on understand how 'safety' is truly distinguishable from 'censorship' or 'political correctness' i've said this many times, but the concept of ai "safety" is really brand safety. What Anthropic is saying is that they're willing to risk some bad press by skipping the additional training and fine-tuning that ensure their models do not output something people may find outrageous. | |
| ▲ | miltonlost 11 minutes ago | parent | prev | next [-] | | david guetta, if that really is you, stick to music rather than using Nazi man's propaganda machine | |
| ▲ | gehwartzen 4 hours ago | parent | prev | next [-] | | Well, we teach kids not to yell “Fire!” in a crowded theatre or “N***!“ at their neighbor. We also teach our industrial machines to distinguish between fingers and bolts, our cars to not say “make a left turn now” when on a bridge, etc | | |
| ▲ | rudhdb773b 3 hours ago | parent [-] | | The critical point is who the "we" is. Is "we" the parents teaching their children their own unique values, or is the "we" a government or corporation forcing one set of values on all children. Why not encourage the users of AI to use a Safety.md (populated with some reasonable but optional defaults)? | | |
| ▲ | dminik 3 hours ago | parent [-] | | There's nothing a meaningless document can do when the AI is not aligned in the first place. | | |
| ▲ | lupire 44 minutes ago | parent [-] | | "alignment" is the computer version of (philosophical, not medical) "consciousness", a totally subjective, immeasurable concept. |
|
|
| |
| ▲ | SlinkyOnStairs 2 hours ago | parent | prev [-] | | > I VERY LARGELY prefer an AI like grok that doesn't pretend and let the onus of interpretation to the user rather than a bunch of anonymous "researchers" that may be equally biased, at the extreme, may tell you that America's founding father were black women Setting aside for a moment that Grok is manipulated and biased to a hilarious extent ("Elon is world champion at everything, including drinking piss"): there is no such thing as "unbiased". There will always be bias in these systems, whether picked up from the training data or from the choices made by the AI's developers/researchers, even if the latter don't "intend" to add any bias. Ignoring this problem doesn't magically create a bias-free AI that "speaks the truth about the founding fathers". The bias in the training data, the implicit unconscious bias in the design decisions: that didn't come out of thin air. It's just somebody else's bias. All the existing texts on the founding fathers are filled with 250 years of bias, propaganda, and agenda pushing from all sorts of authors. There is no way to have no bias, no propaganda, no "agenda pushing" in the AI. The only thing that can be done is to acknowledge this problem and try to steer the system to a neutral position. That will be "agenda pushing" of one's own, but that's the reality of all history and all historians since Herodotus. You just have to be honest about it. And you will observe that current AI companies are excessively lazy about this. They do not put in the work, but instead slap on a prompt begging the system to "pls be diverse" and try to call it a day. This does not work. > Of course saying to someone to go kill himslef is a prety sure 'no-no' but so many things are up to interpretation. Bear in mind that the context of Anthropic's pivot here is the Pentagon's dollars. This isn't just about "anti-woke AI", it's about killbots.
Sure, Hegseth wants his robots to not do thoughtcrime about, say, trans people or the role of women in the military. But above all he wants to do a lot of murder. Anthropic dropping their position of "We shouldn't turn this technology we can barely control into murder machines" because they're running out of money is damnable. | | |
| ▲ | lupire an hour ago | parent [-] | | You understood the issue so well but still made the mistake you identified, by claiming that "neutral" exists. "Neutral" is a synonym for "bias toward status quo" |
|
|