hcs 7 days ago
If I'm reading this right, yes, the training was fair use, but I was responding (unclearly) to the claim that the pirated books weren't used to train commercially released LLMs. The judge complained that it wasn't clear what was actually used. From the June order https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/... [pdf]:

> Notably, in its motion, Anthropic argues that pirating initial copies of Authors’ books and millions of other books was justified because all those copies were at least reasonably necessary for training LLMs — and yet Anthropic has resisted putting into the record what copies or even sets of copies were in fact used for training LLMs.

> We know that Anthropic has more information about what it in fact copied for training LLMs (or not). Anthropic earlier produced a spreadsheet that showed the composition of various data mixes used for training various LLMs — yet it clawed back that spreadsheet in April. A discovery dispute regarding that spreadsheet remains pending.
rise_before_sun 7 days ago
Thanks for this info. I was looking for which pirated books were used for which model.

Ethically speaking, if Anthropic (a) later purchased every book it pirated or (b) compensated every author whose book was pirated, would that absolve an illegally trained model of its "sins"? To me, the taint still remains. Which is a shame, because it's considered the best coding model so far.