rexpop 3 hours ago

Cal Newport and tech commentator Ed Zitron discussed this disparity between Anthropic's public image and their actual practices. Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has been deeply integrated with the US military, holding classified access since June 2024. The podcast highlights that Claude has been actively utilized during the "Venezuela incursion" and the ongoing "war in Iran".

Despite this active involvement, CEO Dario Amodei released a statement attempting to publicly distance the company from the Department of Defense by declaring they would not allow their technology to be used for "mass domestic surveillance" or "fully autonomous weapons". Zitron categorizes this as a highly calculated PR maneuver, pointing out that LLMs are fundamentally incapable of controlling autonomous weapons anyway. The stunt worked: it manufactured a wave of positive press, with celebrities and commentators praising Anthropic as an ethical objector, right when the company was trying to secure an IPO or a massive ~$100 billion valuation, all while it quietly remained an active part of the war effort.

Beyond their military contracts, the podcast details several highly questionable business practices Anthropic has used to artificially inflate their numbers:

1. During a lawsuit regarding their military contract, Anthropic's CFO filed a sworn affidavit revealing the company had only made $5 billion in revenue over its entire lifetime. This directly contradicted leaked media reports suggesting they made $4.5 billion in 2025 alone, revealing that the company's publicly perceived run rate was heavily exaggerated through the "shady revenue math" popular in Silicon Valley, a major discrepancy that most financial journalists ignored.

2. When the open-source agent library OpenClaw first launched, Anthropic deliberately allowed users to connect a $200/month "max account" and essentially burn through thousands of dollars of API compute at Anthropic's expense. Zitron points out that Anthropic knowingly let this happen to temporarily boost their usage metrics and hype while they raised a $30 billion funding round. Just weeks after securing the funding, they abruptly cut off access for these users, a move Zitron cites as proof of them being an "unethical company".

Furthermore, the company has faced criticism for gaslighting users, maintaining poor service availability, and silently degrading model performance while rug-pulling users on rate limits. As Zitron summarizes, it is highly unlikely that either Anthropic or OpenAI actually care about these ethical boundaries beyond how they can be weaponized for better PR and higher valuations.

aesthesia 2 hours ago | parent | next [-]

There's some validity to these criticisms, but it would be a lot more credible to cite someone whose job isn't "loudly promote any claim that sounds negative for AI, regardless of how well-founded it is."

petcat 2 hours ago | parent | prev | next [-]

> Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has taken tens of billions from investors just like everyone else has. There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

So yes, this is obvious despite whatever image they try to cultivate.

fwipsy 2 hours ago | parent | next [-]

Anthropic is a public benefit corporation, which limits its fiduciary obligations to shareholders.

Just because they screwed up their billing doesn't mean every ethical commitment they've ever made is bunk.

bluefirebrand 2 hours ago | parent | prev [-]

> There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

At that scale, ethics and morality should become more important, not discarded

GolfPopper an hour ago | parent | next [-]

Alternatively, finance at that scale ought not be permitted to exist, because of the moral hazard it represents.

voakbasda an hour ago | parent | prev [-]

You will find that morals and ethics at that scale are too expensive to maintain.

bluefirebrand an hour ago | parent [-]

Then that scale should not be allowed to exist and we should fight aggressively to prevent it

avarun an hour ago | parent | prev | next [-]

Ed Zitron has absolutely zero credibility, meaning these claims have zero credibility.

rickydroll 2 hours ago | parent | prev | next [-]

I think all the AI companies want to hook up with the US military, as it's the only way they'll cover their debts to investors.

GolfPopper an hour ago | parent [-]

"You must destroy the economy to keep us afloat, because National Security!" has been a clear goal of the LLM hucksters for a long time.

fwipsy 2 hours ago | parent | prev [-]

"LLMs are fundamentally incapable of controlling autonomous weapons" -- This was Anthropic's stance too, right?

"Quietly remained an active part of the war effort" - Anthropic was totally transparent about it, but yeah, not great.

"Leaks were wrong" - and that's Anthropic's fault?

OpenAI agreed to assist the DoD with zero boundaries and then lied about it. Can we at least give them credit for not doing that? If we just throw up our hands and say "they're all awful, whatever" then the result is reduced pressure on them to be better. Like it or not, I do not think AI is going away and as far as I can tell, despite billing problems, Anthropic's still the least bad frontier lab.