benterix 2 days ago

So?

otherme123 2 days ago | parent [-]

I don't know much about LLM training, but earlier AI systems needed clean data to train on. You shouldn't train on generated data.

For example, say you had a classifier that works at 95% precision, trained on carefully labeled data. Then, to train the next version, you download 1 TB of images, classify them with your previous model, and use those labels to retrain. Do you expect to get better than 95%, or are you poisoning your model?
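A back-of-the-envelope way to see the concern: suppose each new model simply agrees with its (noisy) training labels 95% of the time, independently of whether those labels are correct. On a binary task, accuracy then drifts toward chance over generations. This is a toy probabilistic sketch with made-up numbers, not a claim about any real pipeline:

```python
# Toy model of iterated pseudo-labeling on a binary task.
# Assumption (hypothetical): each generation's model matches its training
# labels 95% of the time, independent of whether those labels were right.

def next_accuracy(p: float, agreement: float = 0.95) -> float:
    # Correct when the labels were right and the model copies them,
    # or the labels were wrong and the model happens to flip them.
    return p * agreement + (1 - p) * (1 - agreement)

acc = 0.95  # generation 0: trained on hand-labeled data
for gen in range(1, 11):
    acc = next_accuracy(acc)
    print(f"generation {gen}: accuracy ~ {acc:.3f}")
```

Each step satisfies p_next - 0.5 = 0.9 * (p - 0.5), so accuracy decays geometrically toward 50% (coin flipping) unless fresh, correctly labeled data re-enters the loop.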

I'm asking: can you do that with an LLM? Feed it data that's known to be at best 95% accurate? I've done some work with Whisper, and it often produces runs of repeated words, like "bye bye bye bye bye bye", even though the word was only said once. Should I use that kind of data to train an LLM?

I saw an experiment where an LLM was fed an image and asked to reproduce it, and then the process was repeated with each generated image. After ten or so cycles, the content (a photo of a human head) was barely recognizable.
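That kind of degradation shows up with any iterated lossy round trip, no LLM required. The toy sketch below replaces the model with "add a little noise, then re-quantize", which is only a crude stand-in for a generative round trip; the noise level and gray-level grid are arbitrary assumptions:

```python
import numpy as np

# Stand-in for "generate an image from the previous image": each cycle adds
# a little noise and re-quantizes to 16 gray levels, like a lossy
# encode/decode. Purely illustrative; all parameters are invented.

rng = np.random.default_rng(0)
original = rng.random((32, 32))  # stand-in for the starting photo

def round_trip(x, levels=16, noise=0.05):
    x = np.clip(x + rng.normal(0.0, noise, x.shape), 0.0, 1.0)
    return np.round(x * (levels - 1)) / (levels - 1)

img, errs = original, []
for cycle in range(10):
    img = round_trip(img)
    errs.append(float(np.abs(img - original).mean()))
    print(f"cycle {cycle + 1}: mean abs drift = {errs[-1]:.3f}")
```

Each pixel performs a small random walk on the quantization grid, so the drift from the original accumulates cycle after cycle rather than averaging out.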

orbital-decay 21 hours ago | parent | next [-]

The reality of working with humongous datasets is that they're always bootstrapped like this, in multiple steps. For LLMs in particular, the entire post-training stage is done on synthetic data. There are ways to avoid the failure modes typical of that (like model collapse), and you need much less real data to keep the model in check than you probably think.
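A cartoonish but concrete picture of model collapse, and of why a small fraction of real data helps: treat the "model" as the empirical token distribution of its training corpus. Rare tokens that miss one generation's finite sample are gone for good, unless real data is mixed back in each round. The vocabulary size, corpus size, and 10% mixing ratio below are all invented for illustration:

```python
import random

# Cartoon model collapse on a discrete vocabulary. The "model" memorizes the
# empirical token distribution, and each generation trains on a finite sample
# of the previous generation's output, so rare tokens die out.
# All sizes and the 10% real-data mix are invented assumptions.

rng = random.Random(0)
vocab = list(range(100))                        # 100 equally likely "tokens"
real = [rng.choice(vocab) for _ in range(200)]  # the hand-curated corpus

def retrain(corpus, n=200):
    # Next generation's corpus = finite sample from the current "model".
    return rng.choices(corpus, k=n)

pure = real
anchored = real
for _ in range(20):
    pure = retrain(pure)
    anchored = retrain(anchored, n=180) + rng.sample(real, 20)  # 10% real

print(f"distinct tokens in real data:      {len(set(real))}")
print(f"after 20 gens, pure self-training: {len(set(pure))}")
print(f"after 20 gens, with 10% real data: {len(set(anchored))}")
```

In the pure self-training loop the set of surviving tokens can only shrink; the anchored loop keeps re-injecting the real distribution, which is the toy version of "a little real data keeps the model in check".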

electroglyph 2 days ago | parent | prev [-]

Phi models are notorious for using mostly synthetic data