kbelder 5 days ago
That is a good point, and I think the takeaway is that there are lots of degrees of freedom here. Open training data would be better, of course, but open weights is still better than completely hidden.
enriquto 5 days ago | parent
I don't see the difference between "local, open weights" and "local, proprietary weights". Is that just the handful of lines of code that call the inference? The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.
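To make the "binary blob" analogy concrete, here is a minimal toy sketch (not any real model or framework, just NumPy): the distributable artifact is nothing but an array of floats on disk, and "inference" is a few lines that multiply by it. Whether that blob's license calls it open or proprietary changes nothing about this code, and the training data (the "source") is not recoverable from the weights.

```python
import numpy as np

# A toy "model": just a blob of floats. Real weights files
# (GGUF, safetensors, etc.) are the same idea at scale.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))

# The distributable artifact is an opaque binary file.
np.save("/tmp/weights.npy", W)

# Anyone holding the blob can run it: load, then a handful
# of lines of arithmetic constitute the entire "inference".
W_loaded = np.load("/tmp/weights.npy")
x = np.ones(4)            # toy input
y = x @ W_loaded          # forward pass: one matrix multiply
print(y.shape)            # (3,)
```

Nothing in the file says how `W` was produced; recovering the training data from it is as hopeless as recovering C source from a stripped executable, which is the point of the compiled-program comparison above.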