philip1209 4 hours ago

We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.

All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.

solenoid0937 3 hours ago | parent | next [-]

> whether the company that branded itself as the ethical AI lab actually is one

FWIW I have two (!!) close friends working at Anthropic, one for nearly two years and one for about four months.

Both of them tell me this is not just marketing: the company actually is ethical and safety-conscious throughout, and that was the most surprising part of joining Anthropic for them. They insist the culture is genuine, which is practically unicorn-rare in corporate America.

We've all worked at FAANG companies, so I know where they're coming from; this got me to drop my cynicism for once, and I plan on interviewing with them soon. Hopefully I can answer this question for myself.

Bolwin 4 minutes ago | parent | next [-]

I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms with their tech being used for war and slaughter, with only two very thin lines: mass surveillance of American citizens, and fully automated weaponry with their current models.

It only showed they were marginally more ethical than OpenAI and xAI, which isn't saying much.

root_axis 3 hours ago | parent | prev | next [-]

Yeah, every engineer in the Bay Area has a way of framing the business they work for as a benign force for good... until they find themselves working somewhere else, at which point they suddenly have a lot to say about the unacceptable things going on there.

From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other Bay Area tech startup: more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self-serving.

JumpCrisscross an hour ago | parent | next [-]

> every engineer in the bay area has a way of framing the business they work for as a benign force for good

This isn't remotely true in my experience. The senior folks I know at Meta, for example, pretty much concede they're ersatz drug dealers.

solenoid0937 an hour ago | parent | prev [-]

TBH I have worked at multiple FAANG companies, and I don't know anyone other than maybe new grads who actually drank the Kool-Aid.

Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.

So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, who share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.

hypersoar 2 hours ago | parent | prev | next [-]

I can believe that such an atmosphere exists there. I can't believe that it will stay. It will be squeezed out by the drive for profit in time.

xvector an hour ago | parent [-]

It might stick, tbh. Their structure as a public benefit corporation with a Long-Term Benefit Trust (PBC + LTBT) severely limits the power of shareholders. https://www.anthropic.com/news/the-long-term-benefit-trust

foolswisdom 3 hours ago | parent | prev [-]

I think cynicism is deserved just from observing Dario's remarks.

giwook 3 hours ago | parent | prev | next [-]

There may be a reason Altman is talked about so much. This article in particular surfaces real information, and new perspectives we haven't heard in this level of detail before, on some pretty significant topics that will affect you, me, and pretty much everyone we know, not only today but well into the future.

You have a point that Anthropic deserves some coverage too, and that there are interesting perspectives we haven't heard on that front either.

But just because that's true doesn't mean this article isn't very much relevant and needed.

Because it is.

freely0085 3 hours ago | parent [-]

The New Yorker has given Anthropic plenty of coverage in issues earlier this year.

ronanfarrow 4 hours ago | parent | prev | next [-]

For what it’s worth, the story, while focused on OpenAI, is not uncritical of Anthropic. It explores whether there is a wider race to the bottom on safety, and an erosion of even some of Anthropic’s commitments.

k1m 3 hours ago | parent | prev | next [-]

After the US launched its attack on Iran, the ethical AI lab's CEO wrote: "Anthropic has much more in common with the Department of War than we have differences." - https://www.anthropic.com/news/where-stand-department-war

mptest 2 hours ago | parent [-]

"how easy it is, for those of us who play no part in public affairs, to sneer at the compromises required of those who do" - Robert Harris

I'm not making any value judgments, but I can see how one might weigh their interpretability research more heavily than what the CEO says, at a time when the corrupt, criminal executive branch is muscling in on everything from what's written on currency to journalistic sources. I generally blame fascists before I blame those unable or unwilling to resist them. Though obviously, ideally, we'd all lock arms and, together through friendship, crush authoritarians and fascists.

whattheheckheck an hour ago | parent [-]

Seriously blame anyone other than the fucking abuser. These people

basisword 3 hours ago | parent | prev | next [-]

OP says they’ve been working on this for 18 months. Most of what you’ve said wasn’t the case until much more recently.

Nevermark 3 hours ago | parent | prev | next [-]

We should stop talking about potential problems or perpetrators, when we have talked about them “enough”?

That would be irrational.

We should give air time to other problems?

I think everyone agrees with that.

You have managed to distill a surprisingly pure vintage of false dichotomy from a near-Platonic varietal of whataboutism.

xvector 4 hours ago | parent | prev | next [-]

Normies don't know what an "Anthropic" is. They use ChatGPT. Particularly sharp normies might know that ChatGPT is made by OpenAI, and the sharpest might know that Sam Altman is the CEO.

Now, they may have heard the word "Anthropic" thanks to recent media coverage, but they don't know what it is and don't remember what it makes. The fact that businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.

So articles about OAI will always produce more revenue for the media, because they relate to what normies actually use day to day.
