lolinder 3 days ago:

I agree that both matter, but one is more important than the other. If they released the binary as "Open Source" but attached a long list of things I wasn't allowed to do with it, the fact that they didn't release the source code would be a secondary concern next to the fact that they're calling it "Open" while it actually carries a trail of legal landmines waiting for anyone who tries to use it as free software.

And that's with a clear-cut case like a binary-only release. With an LLM there's a lot of room for debate about what counts as the preferred form for making modifications to the work (or heck, what even counts as the work). That question is wide open, and it's not worth having that debate when there's a far more egregious problem with their usage.

int_19h 3 days ago:

The catch is that the benefits of open vs. non-open don't translate neatly from software to models. If software is binary-only, it is exceedingly difficult to change it in any substantial way (you can patch the machine code directly, of course, but the very nature of the format makes that very limited). OTOH, with a large language model with open weights but without open training data - the closest equivalent to open source for software - you can still change its behavior very substantially with finetuning or by remixing layers (from different models, even!).
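
As a rough illustration of the kind of modification open weights allow (not from the thread; the model id, target modules, and hyperparameters are placeholder assumptions), a LoRA finetune with Hugging Face transformers + peft might look like:

    # Hypothetical sketch: LoRA-finetune an open-weights model.
    # Model id, target modules, and hyperparameters are illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "some-org/open-weights-7b"  # placeholder model id
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

    # Attach small trainable adapters instead of touching the frozen base weights.
    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # a tiny fraction of the full model

    # From here, train with any standard loop or transformers.Trainer on your
    # own data; the base weights stay intact while the adapters learn.

The point is that this is the same tooling the original releaser would use: no disassembly or reverse engineering is needed to change the model's behavior.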

chme 2 days ago:

> OTOH with a large language model with open weights but without open training data - the closest equivalent to open source for software - you can still change its behavior very substantially with finetuning or remixing layers (from different models even!).

The closest thing to open source would be to have open training data. The weights are the binary, the training data is the source, and the process of getting the weights is the compilation process. Finetuning or whatever is just modding the binaries. Remixing different layers is creating a workflow pipeline by combining different functions of one binary software package with components from other binary software packages.
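
The analogy maps directly onto a training loop. A toy sketch (model, data, and file name are purely illustrative):

    # Toy illustration of the analogy: training data ("the source") is
    # "compiled" by the training process into weights ("the binary").
    import torch
    import torch.nn as nn

    data = torch.randn(256, 10)            # the "source": training data
    targets = torch.randn(256, 1)

    model = nn.Linear(10, 1)               # the architecture
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for _ in range(100):                   # the "compiler": training
        optimizer.zero_grad()
        loss = loss_fn(model(data), targets)
        loss.backward()
        optimizer.step()

    torch.save(model.state_dict(), "weights.pt")  # the "binary": weights

On this reading, shipping weights without training data is like shipping the build artifact without the sources that produced it.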

NitpickLawyer 3 days ago:

> and by "source" they meant the binary they shipped

Common misconception. Weights are not a binary. Weights are hardcoded values that you load into an (open-source or closed-source) engine, and you run that engine. The source code for LLMs lives both in the architecture (i.e. what to do with those hardcoded values) and in the inference engines.

As opposed to binaries, you can modify weights. You can adjust them, tune them for downstream tasks, and so on. More importantly, in principle you the downloader and "company x" the releaser of the model use the same methods and technologies to modify the weights (in contrast to a binary release, where you can only modify the machine code while the creator can modify the source code and recompile).

Llamas aren't open source because the license under which they're released isn't open source. There are plenty of models that are open source though: Mistrals (Apache 2.0), Qwens (Apache 2.0), DeepSeeks (MIT), GLMs (MIT), and so on.
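
The weights-vs.-engine split is visible in how any PyTorch model is loaded. A minimal sketch (the architecture and file name are placeholders, not any real model):

    # The "source": an architecture that says what to do with the values.
    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):               # placeholder architecture
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(1000, 64)
            self.head = nn.Linear(64, 1000)

        def forward(self, tokens):
            return self.head(self.embed(tokens))

    # The weights: hardcoded values, shipped separately from the code above.
    model = TinyLM()
    model.load_state_dict(torch.load("weights.pt"))  # placeholder file

    # Because they are plain tensors, they can be inspected and modified
    # directly, unlike machine code in a stripped binary.
    with torch.no_grad():
        model.head.weight *= 1.01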

hakre 3 days ago:

What you describe reminds me pretty much of a binary blob that is loaded into a machine or a piece of software. Additionally, modifying data in binary form was a long-standing practice last time I looked, but I might not remember correctly.