▲ mvkel 5 hours ago
Good optics, but ultimately fruitless. If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face. The only solution is to make it technically -impossible- to apply AI in these ways, much as Apple has done. They can't be forced to comply with any government, because they don't have the keys.
▲ madrox 4 hours ago | parent | next [-]
I think it is a reasonable moral stance to acknowledge that such things are possible while not wanting to be a part of them. As for making it technically impossible...I think that is what Anthropic means when they say they want to develop guardrails.
▲ adi_kurian 3 hours ago | parent | prev [-]
A somewhat pessimistic take, IMO. You may very well be right, though.