necovek 2 days ago

Where's the training data and training scripts since you are calling this open source?

Edit: it seems "open source" was edited out of the parent comment.

b65e8bee43c2ed0 2 days ago | parent | next [-]

doesn't it get tiring after a while? using the same (perceived) gotcha, over and over again, for three years now?

no one is ever going to release their training data because it contains every copyrighted work in existence. everyone, even the hecking-wholesome safety-first Anthropic, is using copyrighted data without permission to train their models. there you go.

necovek 2 days ago | parent | next [-]

There is an easy fix already in widespread use: "open weights".

It is very much a valuable thing already, no need to taint it with a false promise.

Though I disagree that it wouldn't be used if it were indeed open source: I might not do it inside my home lab today, but at least Qwen and DeepSeek would use and build on what e.g. Facebook was doing with Llama, and they might be pushing the open weights model frontier forward faster.

JumpCrisscross 2 days ago | parent | next [-]

> There is an easy fix already in widespread use: "open weights"

They're both correct given how the terms are actually used. We just have to deduce what's meant from context.

There was a moment, around when Llama was first being released, when the semantics hadn't yet settled. The nutter wing of the FOSS community, to my memory, put forward a hard-line and unworkable definition of open source, and seemed to reject open weights, too. So the definition got punted to the closest thing at hand, which was open weights with limited (unfortunately, not zero) use restrictions. At this point, it's a personal preference that's at most polite to respect if you know your audience holds one.

necovek 2 days ago | parent [-]

The point is that "open source" by now has an established and widespread definition, and "source" hints that what is open is the thing something is built from.

Is this really a debate we still need to be having today? Sounds like grumpiness with Open Source Initiative defining this ~25 years ago when this term was rarely used as such.

If we do not accept a well-defined term and want to keep it a personal preference, we can say that about any word in a natural language.

JumpCrisscross 2 days ago | parent [-]

> "open source" by now has an established and widespread definition

For code, yes. For LLMs, the most commonly-used definition is synonymous with open weight (plus, I think, lack of major use restrictions).

> If we do not accept a well defined term and want to keep it a personal preference, we can say that about any word in a natural language

Plenty of people do. It’s generally polite to entertain their preferences, but only to a limit, and certainly not as a forcing function. The practical reality is that describing DeepSeek’s models as open source is today the mainstream mode.

necovek 2 days ago | parent [-]

https://www.merriam-webster.com/dictionary/open-source

Perhaps you are right and this LLM-specific usage enters a dictionary at some point.

As I believe it is very misleading, I am doing my part to discourage it — it is not, imho, impolite to point out established meaning of words when people misuse them. We all create a language together, and all sides have their say.

JumpCrisscross 2 days ago | parent [-]

I think the debate has been around what constitutes the source code. The mode has settled on weights. The spirit of the dictionary definition seems fine for excluding a definition that’s only practical if you own a multimillion-dollar ersatz mainframe.

SV_BubbleTime a day ago | parent [-]

You don’t need to defend a silly argument.

These models aren’t open source, they’re open weights, and some people will confuse the two.

It doesn’t make the wrong word the right one. Just that it’s a lazy combination and people don’t need to mind.

JumpCrisscross 18 hours ago | parent [-]

> doesn’t make the wrong word the right one. Just that it’s a lazy combination and people don’t need to mind

That’s a fair interpretation. I’m going one step further: if most people use the term “wrong,” including experts and industry leaders, that’s eventually the correct use. The term “open source” as requiring open training data is impractical to the point of being virtually useless outside philosophical contexts. This debate is on the same plane as folks who like to argue tomatoes aren’t vegetables, when the truth is botanically they aren’t while culinarily they are. DeepSeek’s model not being open source is only true for the FOSS-jargony definition of open source—in non-jargon use, it’s open source.

dannyw 2 days ago | parent | prev [-]

Yeah, open weights is really good, especially when base model weights (not just the instruction-tuned ones) are released, like here.

Tepix 2 days ago | parent | prev | next [-]

Nvidia did with Nemo.

niea_11 2 days ago | parent [-]

And they got sued:

https://www.reuters.com/technology/nvidia-is-sued-by-authors...

mike_hearn 2 days ago | parent [-]

Every lab has been sued whether they released training data or not.

fragmede 2 days ago | parent | prev [-]

it's not a gotcha but people using words in ways others don't like.

necovek 2 days ago | parent | next [-]

I can dislike the word "bread" being used for the edible product made from (wheat) flour, yeast, and water, and insist it be called a "dough-nut" (it looks just like a big nut made from dough), but I would frequently be misunderstood.

This is why we standardize the meaning of words and put them in a dictionary: so we can understand each other more effectively.

https://www.merriam-webster.com/dictionary/open-source

a96 2 days ago | parent | prev [-]

It's not about likes, it's a flat-out lie.

bl4ckneon 2 days ago | parent | prev | next [-]

Aww yes, let me push a couple petabytes to my git repo for everyone to download...

necovek 2 days ago | parent [-]

An easier thing would be to say "open weights", yes.

woctordho 2 days ago | parent | prev | next [-]

They are exactly open source. The training data is the internet. Don't say it's on the internet. It IS the internet.

The training scripts are in Megatron and vLLM.

0-_-0 2 days ago | parent | prev [-]

Weights are the source, training data is the compiler.

injidup 2 days ago | parent [-]

You got it the wrong way round. It's more akin to:

1. Training data is the source.
2. Training is compilation/compression.
3. Weights are the compiled output, akin to optimized assembly.

However it's an imperfect analogy on so many levels. Nitpick away.
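The analogy can be made concrete with a toy sketch (illustrative only, not anyone's actual training setup): the "source" (data) goes in, a "binary" (weights) comes out, and the weights alone don't let you rerun the "compilation".

```python
# Toy illustration of the training-as-compilation analogy:
# pure-Python gradient descent that "compiles" (x, y) pairs
# into a single weight w for the model y = w * x.
# All names and numbers here are made up for illustration.

def train(dataset, steps=200, lr=0.05):
    """'Compile' the dataset into one weight via mean-squared-error descent."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error over the whole dataset
        grad = sum(2 * (w * x - y) * x for x, y in dataset) / len(dataset)
        w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # the "source"
w = train(data)                              # the "compilation"
print(round(w, 3))                           # the "binary": w converges to 2.0
```

Releasing only `w` is like shipping only the compiled artifact: useful to run, but not enough to reproduce or audit the build.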

mirekrusin 2 days ago | parent [-]

It's a dataset [0] released under some source-available license or OSI license, i.e. an open dataset or open source dataset.

[0] https://news.ycombinator.com/item?id=47758408

necovek a day ago | parent [-]

So is it open dataset or open source dataset?

E.g. it is no accident that Creative Commons uses different terminology for non-software works.

mirekrusin a day ago | parent [-]

"Open Source" is normally reserved for OSI approved licenses but there are many non-OSI approved, source available licenses as well.

For example, gemma4 is released under the Apache 2.0 license and can be called an open source dataset.

On the other hand, DeepSeek, while an open weights model, is not released under an OSI-approved license; they released it under their own "DeepSeek License Agreement". In general it's as free to use as under a normal OSI license, but it has some restrictions, e.g. military use is explicitly forbidden.