rwmj 2 days ago
Definitely a risk, and already happening, but I presume mostly closed-source AIs are used for this? Like, people using the ChatGPT APIs to generate spam, or Grok just doing its normal thing. Don't see how the open vs closed debate has much to do with it.
patcon 2 days ago
You can't see how a hosted private model (which can monitor usage and adapt its mechanisms in response) has a different risk profile than an open-weight model (which is unmonitorable and becomes runnable on more and more hardware every month)? One can become more controlled and wrangle in the edge cases; the other has exploding edges. You can have your politics around the value of open-source models, but I find it hard to argue that there aren't MUCH higher risks from the lack of containment of open-weight models.
| ||||||||
shortrounddev2 2 days ago
Governments are able to regulate companies like OpenAI and impose penalties for allowing their customers to abuse their APIs, but they are unable to do so if Russia's Internet Research Agency is running the exact same models on domestic Russian servers to interfere in US elections. Of course, the US is a captured state now, so the current US government has no problem with Russian election interference so long as it benefits them.