tza54j | 2 hours ago
We are in a thread about clean-room implementation, and verbatim copies of entire works are irrelevant to that topic. It is enough to have read even parts of a work for the result to be considered a derivative. I would also argue that language models that need gargantuan amounts of training material in order to work can, by definition, only output derivative works. It does not help that certain people in this thread (not you) edit their comments to backpedal and make the follow-up comments look illogical, but that is in line with their sleazy post-LLM behavior.
ben_w | an hour ago
> It is enough to have read even parts of a work for the result to be considered a derivative.

For IP rights, I'll buy that. Less important when the question is capabilities.

> I would also argue that language models that need gargantuan amounts of training material in order to work can, by definition, only output derivative works.

For similar reasons, I'm not going to argue against anyone saying that no machine learning today counts as "intelligent": it is perfectly reasonable to define "intelligence" as the inverse of how many examples are needed. ML partially makes up for being (by this definition) as thick as an algal bloom by being stupid so fast that it actually can read the whole internet.
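To make that definition concrete, here is a minimal sketch in Python of scoring a learner by the inverse of the number of training examples it needs to reach a fixed accuracy threshold. Every name, budget, and threshold here is a hypothetical illustration, not an established benchmark; a data-hungry model that only gets there after reading "the whole internet" scores near zero.

    # Toy sketch of "intelligence as the inverse of how many examples
    # are needed". All names, budgets, and the threshold are hypothetical
    # illustrations, not an established metric.
    from typing import Callable, Iterable

    def sample_efficiency(
        train_and_eval: Callable[[int], float],  # n_examples -> accuracy
        threshold: float = 0.9,
        budgets: Iterable[int] = (10, 100, 1_000, 10_000, 100_000),
    ) -> float:
        """Return 1 / (smallest training-set size reaching `threshold`),
        or 0.0 if no budget suffices. Higher = more 'intelligent' under
        this definition; a model needing the whole internet scores ~0."""
        for n in budgets:
            if train_and_eval(n) >= threshold:
                return 1.0 / n
        return 0.0

    def slow_learner(n: int) -> float:
        # Stand-in for a data-hungry learner: accuracy climbs slowly with n.
        return n / (n + 500)

    print(sample_efficiency(slow_learner))  # 0.0001: needed 10,000 examples

Under this toy metric a few-shot learner that hits the threshold at 10 examples scores 0.1, four orders of magnitude above the data-hungry one, which is exactly the gap the comment is pointing at.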