bryan0 a day ago

One of the points I was trying to make is that the statement:

> trying to coerce people into using your product out of fear

is nonsense.

Everyone agrees that there are legitimate reasons to be fearful of this technology; that fear is not a fabrication. But we need to figure out how to proceed in a safe and constructive way.

What "coercion" is occurring here? Either you find the technology valuable and you want to pay for it, or you find it not useful (or worse, harmful) and you do not want to pay for it.

Maybe another way of putting it: what do you think the frontier AI companies should do in this situation? Being straightforward about the dangers seems like the correct thing to do, and erring on the side of caution is probably prudent. You could go further and argue they should slow down or stop development, but that is something the govt should impose; we should not expect or trust the companies to do this themselves. Ironically, in the Anthropic / Pentagon case, we have Anthropic trying to pump the brakes and put up guardrails while the govt wants to go full-steam ahead with autonomous warfare.

The other issue with slowing down or pausing development is that it requires an unheard-of level of agreement, including with companies in China, or else it will probably not be effective. You could argue that level of coordination is not even possible at this point.