mvkel 5 hours ago

Good optics, but ultimately fruitless.

If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.

The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done with end-to-end encryption. They can't be compelled to comply with any government demand to decrypt, because they don't hold the keys.
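
To make the analogy concrete: with end-to-end encryption the provider only ever stores ciphertext, so there is nothing to hand over. A toy sketch in Python using the cryptography package (the flow is mine, not Apple's actual design):

    from cryptography.fernet import Fernet

    # The key is generated and kept only on the user's device.
    device_key = Fernet.generate_key()
    cipher = Fernet(device_key)

    # Only ciphertext ever leaves the device. The provider stores
    # this blob and cannot decrypt it without device_key.
    ciphertext = cipher.encrypt(b"messages, location history, etc.")

    # A subpoena to the provider yields ciphertext, nothing more;
    # decryption only happens where the key lives: on the device.
    plaintext = cipher.decrypt(ciphertext)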

madrox 4 hours ago | parent | next

I think it's a reasonable moral stance to acknowledge such things are possible yet not want to be a part of them. Regarding making it technically impossible... I think that's what Anthropic means when they say they want to develop guardrails.

mvkel 4 hours ago | parent

Are the guardrails not part of their core? Isn't that the whole premise of their existence?

madrox 3 hours ago | parent

If you read the statement, they explicitly say that these guardrails don't exist today and that they want to develop them.

Though I have a feeling we're talking about different things. In Claude Code terms: the model might want to rm -rf my codebase. You sound like you want a model that can never run rm -rf at all. Anthropic probably wants to catch dangerous commands and route them to a human for approval, which is what it does today.
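
Conceptually that's just a pre-execution hook: pattern-match the command, pause, and ask a human. A toy Python sketch (the patterns and approval flow are illustrative, not Anthropic's actual implementation):

    import re
    import subprocess

    # Illustrative deny-patterns; a real list would be far longer.
    DANGEROUS_PATTERNS = [
        r"\brm\s+-(rf|fr)\b",         # recursive force delete (simplified)
        r"\bgit\s+push\s+--force\b",  # history rewrite on a shared remote
        r"\bdrop\s+table\b",          # destructive SQL
    ]

    def run_with_guardrail(command: str) -> None:
        # Route matching commands to a human instead of executing blindly.
        if any(re.search(p, command, re.IGNORECASE) for p in DANGEROUS_PATTERNS):
            answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
            if answer.strip().lower() != "y":
                print("Command blocked.")
                return
        subprocess.run(command, shell=True, check=False)

    run_with_guardrail("rm -rf ./build")  # prompts before running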

mvkel an hour ago | parent

That's my point. They founded Anthropic under the sole mandate of "guardrails first," and now they seemingly don't have them at all. So they're just another AI company with different marketing, not the purely altruistic outfit they want everyone to believe they are.

adi_kurian 3 hours ago | parent | prev

A little pessimistic of a take, IMO. You may very well be right, though.