NinjaTrance 3 hours ago

Interesting reading.

They are still focusing on "catastrophic risks" related to chemical and biological weapons production; or misaligned models wreaking havoc.

But they are not addressing the elephant in the room:

* Political risks, such as dictators using AI to implement oppressive bureaucracy.
* Socio-economic risks, such as mass unemployment.

dgellow 19 minutes ago | parent | next [-]

It’s because that would be fairly speculative and cannot be measured. I don’t think that’s something that would make much sense in a system card. But Anthropic leadership does seem to communicate on that topic: https://www.darioamodei.com/essay/the-adolescence-of-technol...

ronsor an hour ago | parent | prev | next [-]

> Political risks, such as dictators using AI to implement oppressive bureaucracy.

I think we're pretty good at that without AI.

jph00 2 hours ago | parent | prev | next [-]

Yeah this has always been the glaring blind spot for most of the "AI Safety" community; and most of the proposals for "improving" AI safety actually make these risks far worse and far more likely.

astrange 28 minutes ago | parent | prev | next [-]

The unemployment rate in the US is whatever the Fed wants it to be, and isn't a function of available technology.

unglaublich 2 hours ago | parent | prev | next [-]

> * Political risks, such as dictators using AI to implement oppressive bureaucracy. * Socio-economic risks, such as mass unemployment.

Even Haiku would score 90% on that.

andrewstuart2 2 hours ago | parent | prev | next [-]

I'm getting flashbacks to the 2018 hit:

    This is extremely dangerous to our democracy

We evolved to share information through text and media, and with the advent of printing and now the internet, we often derive our sense of consensus and certainty from the sheer preponderance of information, which used to take far more effort to produce. We're now at a point where a disproportionately small input can produce a massively proliferated, coherent-enough output that gives the appearance of consensus, and I'm not sure how we're going to deal with that.
girvo an hour ago | parent | prev [-]

They don’t care about those risks, because they’re unsolvable and would mean they wouldn’t make money/gain power.

dgellow 22 minutes ago | parent [-]

Dario Amodei, CEO of Anthropic, discusses all of those risks in this essay: https://www.darioamodei.com/essay/the-adolescence-of-technol...

He seems to care quite a lot?