mexicocitinluez 5 hours ago
So you went from "you can't curate everything" to "they're simply projecting their own ones onto everyone else". That's a pretty big leap in logic, isn't it? That because you can't curate everything, then by default, you're JUST curating your own views?
orbital-decay 5 hours ago | parent
My comment assumes some familiarity with LLM training realities. Preference is transferred to the model in both pre- and post-training. Pretraining datasets are curated to an extent (implicit transfer), but they're simply too vast to be fully controlled, and they need to be diverse, so you can't throw too much out or the model will be dumb. Post-training datasets and methods, on the other hand, are precisely engineered to make the model useful and to steer it in the desired direction. So there are always two types of bias: one is picked up from the ocean of data, the other (alignment training, data selection, etc.) is deliberately forced onto it. The second kind is the projection in question, and it doesn't require curating everything.
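
To make the two injection points concrete, here's a minimal Python sketch (the helper names and toy numbers are hypothetical, not any lab's actual pipeline): a heuristic curation filter that implicitly shapes a pretraining corpus, and a DPO-style preference loss, one standard post-training method, that explicitly pushes a model toward curated preferences:

    import math

    # Bias source 1 (implicit): pretraining data curation.
    # Toy quality filter; whatever the heuristic favors is silently
    # baked into the corpus. Real pipelines use classifiers, dedup, etc.
    def keep_document(doc: str) -> bool:
        long_enough = len(doc.split()) >= 5
        letter_ratio = sum(c.isalpha() or c.isspace() for c in doc) / max(len(doc), 1)
        return long_enough and letter_ratio > 0.8

    # Bias source 2 (explicit): preference optimization in post-training.
    # Standard DPO loss on one (chosen, rejected) pair: -log(sigmoid(margin)),
    # where the margin rewards the policy for ranking the curated "chosen"
    # answer above the "rejected" one, relative to a frozen reference model.
    def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
        margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    corpus = ["the quick brown fox jumps over the lazy dog",
              "$$!! buy now %% click ##"]
    print([d for d in corpus if keep_document(d)])  # filter drops the spammy doc
    print(dpo_loss(-2.0, -3.5, -2.5, -3.0))         # ~0.644; lower = preference absorbed

Note that the filter has no "bias" knob: its bias is whatever the heuristic happens to reward, while the loss is explicitly pointed at a chosen preference. That's the difference between the two transfer mechanisms.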