achierius 3 days ago
> The "Open" in "Open Source" is what matters most by far, the same way that the "Free" in "Free Software" is the key word that qualifies the kind of software we're talking about.

I don't think this is true. If someone said "look, my software is open source" and by "source" they meant the binary they shipped, the specific definition of "open" they chose would not matter much for the sort of things I'd like to do with an open source project. Both are important.
lolinder 3 days ago
I agree that both matter, but one is more important than the other. If they released the binary as "Open Source" but attached a long list of things I wasn't allowed to do with it, the missing source code would be of secondary concern to the fact that they're calling it "Open" while it actually carries a trail of legal landmines waiting to bite anyone who tries to use it as free software.

And that's with a clear-cut case like a binary-only release. With an LLM there's a lot of room for debate about what counts as the preferred form for making modifications to the work (or heck, what even counts as the work). That question is wide open, and it's not worth having that debate when there's a far more egregious problem with their usage.
int_19h 3 days ago
The catch is that the benefits of open vs non-open don't translate neatly from software to models. If software is binary-only, it is exceedingly difficult to change it in any substantial way (you can patch the machine code directly, of course, but the very nature of the format makes this very limited). OTOH, with a large language model that has open weights but no open training data (the closest equivalent to open source for software), you can still change its behavior very substantially with finetuning or by remixing layers (even from different models!).
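A toy sketch of why that's true: open weights are just numbers, so "finetuning" and "merging" are ordinary arithmetic on them, no source needed. Nothing below is a real model; both function names are made up for illustration.

```python
# Toy illustration: open weights are editable data, unlike compiled
# machine code. (Hypothetical names; a real LLM has billions of these.)

def merge_weights(w_a, w_b, alpha=0.5):
    """Blend two models' weights -- a crude form of model merging."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(w_a, w_b)]

def finetune_step(weights, grads, lr=0.1):
    """One SGD update: nudge each weight against its gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

model_a = [1.0, 2.0, 3.0]   # weights released by one lab
model_b = [3.0, 0.0, 1.0]   # weights released by another

merged = merge_weights(model_a, model_b)          # [2.0, 1.0, 2.0]
tuned = finetune_step(merged, [1.0, -1.0, 0.0])   # roughly [1.9, 1.1, 2.0]
```

Neither step required the training data or any cooperation from the releaser, which is the point: the format itself invites modification.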
| ||||||||
NitpickLawyer 3 days ago
> and by "source" they meant the binary they shipped

Common misconception. Weights are not a binary. Weights are hardcoded values that you load into an (open-source or closed-source) engine, and you run that engine. The "source code" for LLMs lives both in the architecture (i.e. what to do with those hardcoded values) and in the inference engines.

Unlike binaries, you can modify weights. You can adjust them, tune them for downstream tasks, and so on. More importantly, in theory you the downloader and "company x" the releaser of the model use the same methods and technologies to modify the weights (in contrast to a binary release, where you can modify only the machine code while the creator can modify the source code and recompile).

Llamas aren't open source because the license under which they're released isn't open source. There are plenty of models that are open source tho: Mistral's (Apache 2.0), Qwen's (Apache 2.0), DeepSeek's (MIT), GLM's (MIT), and so on.
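The engine/weights split can be sketched in a few lines: the "engine" is ordinary code, the weights are plain data it loads, and swapping the data changes behavior with no recompilation. This is a hypothetical one-neuron toy, not how any real inference engine is written.

```python
# Hypothetical sketch of the engine/weights split. The engine encodes
# the "architecture" (here: multiply inputs by weights and sum); the
# weights are just data that anyone can edit before loading.

def engine(x, weights):
    """A one-neuron 'model': the architecture is a dot product."""
    return sum(w * xi for w, xi in zip(weights, x))

released_weights = [0.5, -1.0, 2.0]              # what "company x" ships
my_weights = [w * 2 for w in released_weights]   # my downstream tweak

print(engine([1.0, 1.0, 1.0], released_weights))  # 1.5
print(engine([1.0, 1.0, 1.0], my_weights))        # 3.0
```

Note that modifying a shipped binary to the same effect would mean patching machine code; here the modification is the same kind of operation the original releaser performs.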