ACCount37 | 2 days ago
"Model collapse" isn't real. It's a laboratory failure mode that doesn't happen in real-world environments. It's popular because some people latched onto the idea, desperately wanting something to stop AI tech from advancing. It, quite obviously, doesn't stop AI tech from advancing.

Now, you could write an entire research paper on why model collapse happens or fails to happen. But a simple way to think of it is: looping an AI onto itself multiple times amplifies that AI's own deficiencies, distortions, and idiosyncrasies, until, after enough iterations, they come to completely dominate its outputs.

This doesn't apply at all to training an LLM on Whisper outputs that are, in turn, based on human-generated videos. The LLM will inherit some Whisper quirks, but most of the data in Whisper outputs comes from the videos themselves.
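The "looping onto itself" mechanism is easy to demo with a toy model. This sketch (my own hypothetical setup, nothing to do with Whisper or any actual LLM) repeatedly fits a Gaussian to samples drawn from the previous generation's Gaussian. Finite-sample estimation error compounds each generation, and the learned variance collapses:

```python
import random
import statistics

random.seed(0)

# Generation 0: the "real" data distribution.
mu, sigma = 0.0, 1.0
n = 10  # small sample per generation, so estimation error is large

history = [sigma]
for generation in range(200):
    # Each generation trains only on the previous generation's output.
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    history.append(sigma)

print(f"std after 200 self-training generations: {sigma:.2e}")
```

With a small n, the fitted standard deviation drifts downward on average each generation, so the "model" ends up concentrated on a narrow sliver of the original distribution. Mixing in fresh human-grounded data at each step (as in the Whisper-transcript case) breaks this loop.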