userbinator 9 hours ago
Closed or open source doesn't matter; it's the ability to control them that's important. People have been cracking and patching for decades without source, but they have that control. Contrast this with remote attestation, where they might show you the source code for everything but you're still powerless to do anything.
Rohansi 5 hours ago
> Closed or open source doesn't matter; it's the ability to control them that's important. People have been cracking and patching for decades without source, but they have that control.

You have no idea what has been baked into the weights during training. In theory you could find biases and attempt to "patch" them out, but it's a vastly different process from patching machine code. Consider what would happen if Google's open-weight models were better at writing code targeting Google's services than their competitors'. Is that something that could be patched? What if there were more subtle differences that you only notice much later, after some statistical analysis?
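That last kind of skew is at least measurable in principle. A minimal sketch in Python, assuming you've already collected completions from the model under test and from some baseline model on the same provider-neutral prompts; the service-name lists, the function names, and the chi-squared framing are all illustrative assumptions, not anything from the thread:

    # Compare how often two models' completions mention one vendor's
    # services vs. a rival's, and test whether the rates differ.
    import re
    from scipy.stats import chi2_contingency

    GOOGLE_SERVICES = ["BigQuery", "Cloud Run", "Firestore", "GKE"]  # hypothetical list
    RIVAL_SERVICES = ["Redshift", "Lambda", "DynamoDB", "EKS"]       # hypothetical list

    def count_mentions(texts, names):
        # Total occurrences of any listed service name across all texts.
        pattern = re.compile("|".join(map(re.escape, names)))
        return sum(len(pattern.findall(t)) for t in texts)

    def bias_test(completions, baseline_completions):
        # 2x2 contingency table: (model, baseline) x (Google, rival)
        # mention counts. Assumes nonzero row/column totals, or
        # chi2_contingency will raise.
        table = [
            [count_mentions(completions, GOOGLE_SERVICES),
             count_mentions(completions, RIVAL_SERVICES)],
            [count_mentions(baseline_completions, GOOGLE_SERVICES),
             count_mentions(baseline_completions, RIVAL_SERVICES)],
        ]
        chi2, p, _, _ = chi2_contingency(table)
        return chi2, p  # small p: the two models' mention rates differ

Even then, a small p-value only tells you the rates differ; attributing that to something baked into the weights, rather than, say, one vendor simply having better documentation in the training data, is the genuinely hard part.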