NitpickLawyer 4 hours ago

It's an opinion piece by a dude working as a "Professor of Pharmaceutical Technology and Biomaterials at the University of Ferrara".

It has all the hallmarks of not understanding the underlying mechanisms while repeating the common tropes. Quite ironic, considering what the author's intended "message" is. JPEG -> JPEG -> JPEG bad, so LLM -> LLM -> LLM must be bad, right?
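
To be fair, the generation-loss half of the analogy is easy to demo. A toy sketch, assuming Pillow and numpy are installed and "input.jpg" is a placeholder filename; note that re-encoding at identical settings tends to plateau after a few generations, which already strains the comparison:

    # Re-encode the same image repeatedly and measure drift from the original.
    import io
    import numpy as np
    from PIL import Image

    img = Image.open("input.jpg").convert("RGB")   # placeholder input file
    original = np.asarray(img, dtype=np.float64)

    for gen in range(1, 11):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=75)   # lossy re-encode
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
        drift = np.abs(np.asarray(img, dtype=np.float64) - original).mean()
        print(f"generation {gen}: mean abs pixel drift = {drift:.2f}")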

It reminds me of the media reception of that paper on model collapse: "training on LLM-generated data leads to collapse". That was in '23 or '24? Yet we're not seeing any collapse, despite models being trained largely on synthetic data for the past two years. That's not how any of it works, yet everyone has an opinion on how badly it works. Jesus.
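
The paper's core result is reproducible as a toy, for what that's worth: fit a distribution, sample from the fit, refit on only the samples, repeat. A minimal sketch with a Gaussian (numpy only; the shrinking sigma is the "collapse", and it only appears because nothing real ever re-enters the loop):

    # Each generation is "trained" purely on the previous generation's samples.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=50)        # small "real" dataset

    for gen in range(1, 101):
        mu, sigma = data.mean(), data.std()     # fit this generation's model
        data = rng.normal(mu, sigma, size=50)   # next gen sees only synthetic
        if gen % 20 == 0:
            print(f"gen {gen}: mu={mu:+.3f} sigma={sigma:.3f}")

Real training pipelines curate, filter and mix in fresh data, which is exactly the step this loop omits.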

It's insane how these kinds of opinion pieces get so upvoted here, while worthwhile research, cool positive examples and so on languish in /new with one or two upvotes. This has ceased to be a technical subject and has moved to muh identity.

simonw 4 hours ago | parent | next [-]

Yeah, reading the other comments on this thread, this is a classic example of that Hacker News (and online forums in general) thing where people jump at the chance to talk about a topic driven purely by the headline, without engaging with the actual content.

(I'm frequently guilty of that too.)

ghywertelling 4 hours ago | parent [-]

Even if that isn't the case, isn't it the fact that AI labs don't want their models to be edgy in any creative way, and instead choose a middle way (Buddhism, so to speak)? Are there any AI labs training their models to be maximally creative?

PurpleRamen 4 hours ago | parent | prev [-]

> Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past 2 years.

Maybe because researchers learned from the paper how to avoid the collapse? Awareness alone often helps to sidestep a problem.
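
Extending the toy Gaussian sketch upthread: keeping the real data in every generation's training mix is one such sidestep, and the fitted sigma then stays near 1 instead of shrinking. Same illustrative assumptions as before:

    # Anchor each generation's fit with the original real dataset.
    import numpy as np

    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=50)          # fixed real dataset
    data = real.copy()

    for gen in range(1, 101):
        mu, sigma = data.mean(), data.std()
        synthetic = rng.normal(mu, sigma, size=50)
        data = np.concatenate([real, synthetic])  # real data anchors the fit
        if gen % 20 == 0:
            print(f"gen {gen}: sigma={sigma:.3f}")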

NitpickLawyer 3 hours ago | parent [-]

No one did what the paper actually proposed. It was a nothingburger in the industry, yet insanely popular on social media.

Same with the "LLMs don't reason" paper from "Apple" (two interns working at Apple, but anyway). The media went nuts over it, even though it was littered with implementation mistakes and not worth the paper it was(n't) printed on.

dsf2d 2 hours ago | parent [-]

Who cares? This is a place where you should be putting forth your own perspective based on your own experience, not parroting what someone else already wrote.