bryan0 a day ago

> Why do AI companies want us to be afraid of them? ... According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they're already doing to the world.

People seem unable to make up their minds about whether AI is very dangerous or not. I think what the AI companies and this author agree on is that this technology is potentially extremely dangerous. AI impacts labor markets, the environment, warfare, mental health, etc. It's harder now to find things it will not impact.

So if we agree that AI is potentially dangerous, it makes the title question moot: Both AI companies and this author want people to be aware of the dangers that AI poses to society. The real question is what do we do about it?

The nuance here is that AI can be incredibly positive as well. It's like the invention of fire: you can use it for good or bad, and there will be many unintended consequences along the way.

We could legislate and ban AI tech. People have seriously proposed this, yet it feels completely unrealistic. If the US bans AI research, the research will simply move elsewhere. It's like trying to ban fire because it's dangerous: some groups will learn to work with fire and gain an extreme advantage over those that don't (or destroy themselves in the process).

So maybe instead of demonizing the AI companies, we could have a nuanced debate about this tech and propose solutions that are best for our society?

Tangurena2 a day ago | parent | next [-]

> People seem unable to make up their minds about whether AI is very dangerous or not.

This is a propaganda tactic. For decades, tobacco companies claimed that there was no evidence that smoking was bad for one's health. Then, only after losing dozens of lawsuits did the propaganda switch to "but everyone knew for 100+ years that smoking was lethal".

One can read about it in Trust Us, We're Experts, or Toxic Sludge Is Good For You, or the other books by the same authors.

https://en.wikipedia.org/wiki/Trust_Us,_We%27re_Experts

https://www.prwatch.org/tsigfy.html

bryan0 a day ago | parent [-]

Please explain how this tactic applies here. In this case we have the AI companies saying this technology is potentially very harmful, in fact existential. That seems the complete opposite of what big tobacco did.

What I meant by

> People seem unable to make up their minds about whether AI is very dangerous or not.

Is that the article says two contradictory things:

1. AI companies are misleading us when they say their tech is dangerous and people should be afraid.

2. AI is currently very dangerous and people should be afraid.

Anecdotally, people on the internet (including HN), seem unable to agree on whether AI is real or overblown "hype".

dodu_ a day ago | parent | prev | next [-]

> So maybe instead of demonizing the AI companies, we could have a nuanced debate about this tech and propose solutions that are best for our society?

These are not mutually exclusive.

Calling out the demonic behavior of trying to coerce people into using your product out of fear is not an indictment of the underlying technology itself.

bryan0 a day ago | parent [-]

One of the points I was trying to make is that the statement:

> trying to coerce people into using your product out of fear

is nonsense.

Everyone agrees that there are legitimate reasons to be fearful of this technology; this is not a fabrication. But we need to figure out how to proceed in a safe and constructive way.

What "coercion" is occurring here? Either you find the technology valuable and you want to pay for it, or you find it not useful (or worse harmful), and you do not want to pay for it.

Maybe another way of putting it: what do you think the frontier AI companies should do in this situation? It seems that being straightforward about the dangers is the correct thing to do, and being overly cautious is probably prudent. You could go further and argue they should slow down or stop development, but that is something the govt should impose; we should not expect or trust the companies to do it themselves. Ironically, in the Anthropic / Pentagon case, we have Anthropic trying to pump the brakes and put up guardrails while the govt wants to go full-steam ahead with autonomous warfare.

The other issue with slowing down or pausing development is that it requires an unheard-of level of agreement, even with companies in China, or else it will probably not be effective. You could argue this is not even possible at this point.

autoexec a day ago | parent | prev [-]

> People seem unable to make up their minds about whether AI is very dangerous or not.

Pretty much everyone agrees that what passes for AI these days is very dangerous. People only differ in which ways they think it is (or will be) dangerous and which dangers they are most worried about.

Some are worried about the environmental harms. Some are worried that AI will do a very shitty job of very important things, but that companies will use it anyway because it saves them money, and we'll suffer for it. Some are worried that AI will take their jobs regardless of how well that AI performs. Some are worried that AI will make their jobs suck. You've also got people who think that our glorified chatbots are going to gain consciousness and become literal gods who will take over the planet and usher in the Robot Wars.

Some of those dangers are clearly more immediate and realistic than others. We should probably be focused on those right now. We can start by limiting the environmental harms they're causing and making companies responsible for the costs and impacts they have on our environment. Maybe make it illegal for power companies to raise the price of power for individuals just because some company wants to build a bunch of power hungry data centers. Let those companies fully bear the costs instead.

We can make sure that anyone using AI for any reason cannot use AI as a defense for the harms their use of AI causes. If a company uses AI to make hiring decisions and the result is discrimination, an actual human at that company gets held legally accountable for that. If AI hallucinates a sale price, the company must honor that price. If AI misidentifies a suspect and an innocent person ends up behind bars a human gets held accountable.

We can ban the use of AI for things like autonomous weapons. Things that are too important to trust to unreliable AI.

We could even do more extreme things: improve our social safety nets so that people put out of work don't become homeless, invest more in AI that individuals can host locally so we aren't forced to hand so much power to a few huge companies, or even force companies to release their models or their training data (which they mostly stole anyway) so that power doesn't consolidate into a small number of companies or individuals. We have lots of options; it just comes down to what we want and how much we can get our elected officials to represent our interests over the interests of the companies stuffing their pockets with cash.