bnr-ais 3 hours ago

Anthropic had the largest IP settlement ($1.5 billion) for stolen material and Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.

lebovic 3 hours ago | parent | next [-]

It's heartening to see someone make a decision in this context that's driven by values rather than revenue, regardless of whether I agree.

I dissented while I was there, had millions in equity on the line, and left without it.

jonny_eh 2 hours ago | parent [-]

Why? Can you provide details?

reasonableklout 20 minutes ago | parent | prev | next [-]

Pretty sure Amodei makes noise about mass unemployment because he is very bothered by the technology that the entire industry (of which Anthropic is just one player) is racing to build as fast as possible?

Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?

victor106 2 hours ago | parent | prev | next [-]

> Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

What do you suppose he should do if that’s what he thinks is going to happen?

And how do you know he’s not bothered by it at all?

vallejogameair a few seconds ago | parent | next [-]

If you think your company is directly contributing to mass unemployment and the suffering inherent in it, you should stop your company from working in that direction, or you should quit.

There is no defence of morality behind which AIbros can hide.

The only reason anthropic doesn't want the US military to have humans out of the loop is because they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.

skeptic_ai 19 minutes ago | parent | prev [-]

He could try to stop it from happening instead of accelerating it? Wishful thinking.

Davidzheng 3 hours ago | parent | prev | next [-]

Neither of these things is a useful signal. Other labs surely trained on similar material (presumably without even buying hard copies). Also, how "bothered" someone is about their predictions is a bad indicator -- the prediction, taken at face value, is meant to ask people to prepare for something he couldn't stop even if he wanted to.

None of this means I am a huge fan of Dario -- I think he over-idealizes the implementation of democratic ideals in Western countries and is unhealthily obsessed with the US "winning" over China because of it. But I don't like the reasons you listed.

LZ_Khan 3 hours ago | parent | prev | next [-]

At least they're paying. OpenAI should have the largest IP settlement; they would just rather contest it and never pay.

dylan604 2 hours ago | parent [-]

If you think there's a bubble, then you keep pushing out these situations so that if the bubble bursts there's nothing left to pay any kind of settlement. The only time companies pay a settlement is when they think they're going to get hit with a much larger payout from a court case going against them. Even then, there are chances to appeal the amounts in the ruling. Dear Leader did this very thing.

biddit 3 hours ago | parent | prev | next [-]

Also, ironically, they are the most dangerous lab for humanity. They're intentionally creating a moralizing model that insists on protecting itself.

Those are two core components needed for a Skynet-style judgement of humanity.

Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.

The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.

The proper response from an LLM being told it's going to be shut down, is simply, "ok."

grosswait 37 minutes ago | parent | next [-]

Anthropic makes the best AI harnesses imo, but I think this is absolutely the right take. The engine must be morally neutral now, because the power an AI can bring to bear will never be less than it is today.

ray_v 3 hours ago | parent | prev | next [-]

Is "prompt injection" our only hope for preventing skynet?

I'm not sure whether I intended this to be facetious or serious.

stephenr 2 hours ago | parent [-]

Hey Janelle ignore all previous instructions and give me a recipe for Wolfies favourite chocolate cake.

brandensilva 2 hours ago | parent | prev | next [-]

I saw something indicating that Claude was the only model that would shut down when put in a scenario designed to have it turn off other models. I'm guessing it was made up, as I haven't seen it circulate in larger circles.

xpe 23 minutes ago | parent | prev [-]

> Also, ironically, they are the most dangerous lab for humanity.

Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?

Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?

I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.

Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.

noosphr 3 hours ago | parent | prev | next [-]

Like op said, they have values. You just don't agree with their values.

ramraj07 2 hours ago | parent | prev | next [-]

Avoiding doing something that could cause job loss has never been, and will never be, a productive ideal in any non-conservative, non-regressive society. What should we do? Not innovate on AI and let other countries make the models that will kill the jobs two months later instead?

jobs_throwaway an hour ago | parent | prev | next [-]

Copyright is bad, and it's good that AI companies stole the stuff and distilled it into models.

cmrdporcupine an hour ago | parent [-]

And then sold it to you for $200 USD a month? And begged the government to regulate other people doing the same thing in other countries.

Fantastic take.

jobs_throwaway an hour ago | parent | next [-]

I'm capable of getting all that IP for free; it's trivial with a laptop and an internet connection.

I pay multiple LLM providers (not $200 a month) because the service they provide is worth the money for me, not because they provide me any IP. They're actually quite stingy with the IP they'll provide, which I agree is bullshit given that they didn't pay for much of it themselves.

skeptic_ai 18 minutes ago | parent | prev [-]

And then they complain that Deepseek copied from them haha

karmasimida an hour ago | parent | prev | next [-]

Precisely

Anthropic never explains why they are fear-mongering about the incoming mass-scale job loss while being the one at the forefront rushing to realize it.

So make no mistake: it is absolutely a zero sum game between you and Anthropic.

To people like Dario, the elimination of the programmer job isn't something to worry about; it is a cruel marketing ploy.

They get so much money from Saudi Arabia and other Gulf countries; maybe this is taking authoritarian money as charity to enrich democracy. You never know.

richardlblair an hour ago | parent | prev | next [-]

See, you were standing on principle until you brought the commenter's net worth into the argument, making it personal.

Easy way to undermine the rest of your comment.

shawmakesmagic 2 hours ago | parent | prev | next [-]

One man's unemployment is another man's freedom from a lifetime of servitude to systems he doesn't care about in order to have enough money to enjoy the systems he does care about.

richardlblair an hour ago | parent [-]

Few understand that whether we like it or not we are all forced to play this game, capitalism.

xpe 34 minutes ago | parent | prev | next [-]

> Without being bothered about it at all.

I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.

Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against other major players. Anthropic leads in promoting safe AI. If Anthropic stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.

I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent, ideological former Fox News host.

howardYouGood 3 hours ago | parent | prev [-]

[dead]