neya 6 hours ago

>Dario on the other hand seems to have an integrity that's particularly rare in this era.

Anthropic actually partnered up with Palantir. They are not the saints you think they are, either.

We should stop worshipping people and companies and stop putting them on pedestals. Just because one party is at fault, doesn't mean the other is automatically innocent. These are all for-profit companies at play here.

https://investors.palantir.com/news-details/2024/Anthropic-a...

devinplatt 3 hours ago | parent | next [-]

FWIW he gives his ethical reasoning on his website:

> Broadly, I am supportive of arming democracies with the tools needed to defeat autocracies in the age of AI—I simply don’t think there is any other way. But we cannot ignore the potential for abuse of these technologies by democratic governments themselves. Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies. Thus, we should arm democracies with AI, but we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.

Basically, he's afraid that not arming the government with AI puts it at a disadvantage vs. other governments he trusts less. Plus, if Anthropic is in the loop that gives them the chance to steer the direction of things a bit (what they were kicked out for doing).

It's not the purest ethical argument, but I also would not say that there is a clearly correct answer.

neya 3 hours ago | parent [-]

Basically he's asking everyone to trust him that he won't cross the line himself. Whatever argument he makes for democracies applies to him as well, and he's not somehow above it. That's the flaw in his argument.

To be brutally honest, to me it just sounds like a very elaborate way of saying "trust me, bro".

vanillameow 2 hours ago | parent [-]

I would agree if not for the fact that they just let a $200M contract slip through over it. You could argue it's "safety theater" in itself but that seems like a risky gambit especially with this administration. I definitely trust Anthropic more than OpenAI. In fact I'd go as far as to say it's probably pretty imperative that Anthropic stays a frontrunner in this race and doesn't leave the field exclusively to OAI (and maybe Google which is just as bad). That doesn't mean I'm exactly happy with Anthropic's comments like "mass surveillance bad but only for the US". But Anthropic at least regularly asks questions about the direction of AI development. I haven't seen the other frontier model companies do any such thing.

taurath 40 minutes ago | parent [-]

What does $200M mean to someone who thinks a trillion dollars in revenue is likely among AI companies in the next 5 years? That's a real quote.

vanillameow 21 minutes ago | parent [-]

Regardless, I think that purely from a ruthless business standpoint, standing up to the DoD was an incredibly ill-advised move. It's basically free financial and technological backing at the cost of ethics. Additionally, basically everyone with functioning eyeballs knows that the current US administration is incredibly vindictive, reckless and short-tempered. I would agree that under a tamer administration, you might do something like this as a publicity stunt. Under the Trump administration, and while the AI arms race is still in full force, it feels like there has to be at least somewhat genuine sentiment behind it; otherwise it just doesn't make sense. What do they accomplish with this? You'll get some users who view you more favourably for it, but that probably won't make up for the lost revenue, and no matter how many people like you, whoever is first to AGI in this industry wins. The prior sentiment won't matter at that point. In the most critical interpretation, I suppose you could say that if the bubble pops it becomes more a matter of sentiment. I don't know; in my mind the math just doesn't work for this to be a business move.

fmajid 6 hours ago | parent | prev | next [-]

If you look at his comments about Palantir and their proposed safeguards, it's clearly a case of "if you're dining with the Devil, you'd better bring a very long spoon".

neya 5 hours ago | parent [-]

These comments came after the deal had soured, not before. If it were truly a matter of morality, the partnership with Palantir would never have happened in the first place.

The contract was explicit: it was for defence purposes, with a company known for surveillance work. So obviously spying was involved; they weren't just going to generate cat videos with it.

Again, nobody is innocent here.

dota_fanatic 6 hours ago | parent | prev | next [-]

I've heard Palantir is essentially the only federal cloud vendor with this administration for secure services. By "partnered up with Palantir", do you mean they provided their models to the government? Or something more?

neya 5 hours ago | parent [-]

From the title of the link enclosed:

"Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations"

Keywords: "Government Intelligence"

xvector 5 hours ago | parent | prev [-]

If you actually read the memo, they've clearly set strict terms with Palantir and rejected many of the false "safeguards" the company offered.