| ▲ | jsnider3 2 days ago |
| No, it's bad, since we will soon reach a point where AI models are major security risks, and we can't get rid of a model once it has been open-sourced. |
|
| ▲ | rwmj 2 days ago | parent [-] |
| "major security risks" as in Terminator style robot overlords, or (to me more likely) they enable people to develop exploits more easily? Anyway I fail to see how it makes much difference if the models are open or closed, since the barrier to entry to creating new models is not that large (as in, any competent large company or nation state can do it easily), and even if they were all closed source, anyone who has the weights can run up as many copies as they want. |
| |
| ▲ | shortrounddev2 2 days ago | parent [-] |
| The risk of AI is that it gets used for industrial-scale misinformation. |
| ▲ | rwmj 2 days ago | parent | next [-] |
| Definitely a risk, and already happening, but I presume mostly closed-source AIs are used for this? Like people using the ChatGPT APIs to generate spam, or Grok just doing its normal thing. I don't see how the open-vs-closed debate has much to do with it. |
| ▲ | patcon 2 days ago | parent | next [-] |
| You can't see how a hosted private model (whose usage can be monitored, with mechanisms adapted in response) has a different risk profile than an open-weight model (which is unmonitorable and becomes runnable on more and more hardware every month)? One can become more controlled and rein in its edge cases; the other has exploding edges. You can have your politics around the value of open-source models, but I find it hard to argue that the lack of containment of open-weight models doesn't carry MUCH higher risks. |
| ▲ | rwmj 2 days ago | parent [-] |
| You're making several optimistic assumptions. The first is that closed-source companies are interested in controlling the risk of their technology being misused. This is obviously wrong: Facebook didn't care that its main platform enabled literal genocide, and xAI doesn't care whether its model's outputs are truthful. The other assumption is that nefarious actors will care about any of this. They'll use what's available, make their own models, or maybe even steal models (if China had an incredible AI, don't you think other countries would be trying to steal the weights?). Bad actors don't care about moral positions, strangely enough. |
| |
| ▲ | shortrounddev2 2 days ago | parent | prev [-] |
| Governments are able to regulate companies like OpenAI and impose penalties for letting their customers abuse their APIs, but they are unable to do so if Russia's Internet Research Agency is running the exact same models on domestic Russian servers to interfere in US elections. Of course, the US is a captured state now, so the current US Government has no problem with Russian election interference as long as it benefits them. |
| |
| ▲ | BeFlatXIII 2 days ago | parent | prev [-] |
| You don't need frontier models to do that. GPT-3 was already good enough. |
|
|