staminade a day ago

AI company leaders didn't invent this concern about the potential dangers of AI, either as a cause of economic disruption, or as a potential extinction risk. Superintelligence was published in 2014, and even then it wasn't a new topic. Technologists, philosophers and science fiction authors have been discussing and speculating about AI risk for decades.

Also, the idea that AI leadership seized on and amplified these concerns purely for marketing purposes isn't plausible. If you're attempting to market a new product to a mass audience, talking about how dangerous and potentially world-ending it is is the most insane strategy you could choose. Any advantage in terms of getting people's attention is going to be totally outweighed by the huge negative associations you are creating in the minds of people who you want to use your product, and the likelihood of bringing unwanted scrutiny and regulation to your nascent industry.

(Can you imagine the entire railroad industry saying, "Our new trains are so fast, if they crash everybody on board will die! And all the people in the surrounding area will die! It'll be a catastrophe!" They would not do this. The rational strategy is to underplay the risks and attempt to reassure people. Even more so if you genuinely believe the risks are being overstated.)

Occam's razor suggests that when the AI industry warned about AI risk, they believed what they were saying. They had a new, rapidly advancing technology, and absent practical experience of its dangers they referred to pre-existing discussions on the topic and concluded it was potentially very risky. And so they talked about the risks in order to prepare the ground in case they turned out to be real. If you warn about AI causing mass unemployment, and then it actually does so, perhaps you can shift the blame to the governments who didn't pay attention and implement social policies to mitigate the effects.

I don't think the AI industry deserves too much of our sympathy, but there is a definite "damned if you do, damned if you don't" aspect to AI safety. If they underplay it, they get accused of ignoring the risks, and if they talk about it, they get accused of scaremongering if the worst doesn't happen.

mghackerlady a day ago | parent | next [-]

>If you're attempting to market a new product to a mass audience, talking about how dangerous and potentially world-ending it is is the most insane strategy you could choose.

Except that isn't the segment of the market they're targeting. They're trying to FOMO businesses into paying them, and the businesses play along partly because they (the businesses) don't care about morals nearly as much as the potential profit (sure, a train that kills everyone on board is bad for the people on board, but just think about how efficient shipping will be), and partly because they're scared that by not doing so they'll end up on the business end of how dangerous these new models supposedly are.

dinfinity a day ago | parent | prev | next [-]

Another important angle is that the ire of the public falls specifically on people. Google is stepping on the gas just as hard as the other AI companies, but they don't have an uncharismatic CEO drawing in tons of hatred and scrutiny.

We live in an age where influential companies with notable figureheads are seen as evil incarnate and influential companies without notable figureheads as, well, you know, the same old same old greedy companies. It just so happens that the most influential AI companies have notable figureheads, so almost everybody fucking hates them and thinks they're up to no good (whatever they do). Truth is that for most of those companies, taking away the influence of their hated CEO and doing away with their ramblings will change absolutely nothing about how that company operates.

AndrewKemendo a day ago | parent | prev | next [-]

Very well put and I think that covers pretty much everything that needs to be said here.

In fact it has been AI people who have been leading discussions around AI ethics and the dangers of AI since 1955. This is not new and it is consistent.

The new thing is that the average person is now entering the debate around AI; and, like with pretty much everything else in the public sphere, doing it with entirely no context.

I always love it when some total novice encounters a problem in a well-studied field as though they're the first one to encounter it. There's nothing more narcissistic than someone thinking they are unique in their position, with absolutely no demonstration of having done their homework on whether this is an established topic in an established field.

That’s where I place 99.9999% of people who are opening their mouth on this topic.

Most of the builders don’t care about this mess and are continuing to work like usual.

goatlover a day ago | parent [-]

> Most of the builders don’t care about this mess and are continuing to work like usual.

So they don't consider it an existential threat, unlike what the CEOs of companies raising hundreds of billions are saying.

AndrewKemendo a day ago | parent [-]

It’s a pointless question

It’s an existential threat if it has existential consequences; if it doesn’t then it isn’t

Can’t know till you build it
