PurpleRamen 3 hours ago
> Yet we're not seeing any collapse, despite models being trained mainly on synthetic data for the past two years. Maybe because researchers learned from the paper how to avoid collapse? Awareness alone is often enough to sidestep a problem.
NitpickLawyer 3 hours ago | parent
No one actually did what the paper proposed. It was a nothingburger in the industry, yet it was insanely popular on social media. Same with the "LLMs don't reason" paper from "Apple" (two interns working at Apple, but anyway). The media went nuts over it, even though it was littered with implementation mistakes and not worth the paper it was(n't) printed on.