patcon 2 days ago:
You can't see how a hosted private model (whose usage can be monitored, with mechanisms adapted in response) has a different risk profile than an open-weight model (which is unmonitorable and becomes runnable on more and more hardware every month)? One can become more controlled and rein in the edge cases; the other has exploding edges. You can have your politics around the value of open-source models, but I find it hard to argue that there aren't MUCH higher risks from the lack of containment of open-weight models.
rwmj 2 days ago (parent):
You're making several optimistic assumptions. The first is that closed-source companies are interested in controlling the risk of using their technology. This is obviously wrong: Facebook didn't care that its main platform enabled literal genocide. xAI doesn't care whether its model's outputs are truthful.

The other assumption is that nefarious actors will care about any of this. They'll use what's available, or make their own models, or maybe even steal models (if China had an incredible AI, don't you think other countries would be trying to steal the weights?). Bad actors don't care about moral positions, strangely enough.