dandersch 3 days ago

> Small quantities of poisoned training data can significantly damage a language model.

Is this still accurate?

embedding-shape 3 days ago | parent

It will probably always be true, but it's also probably not very effective in the wild. Researchers will train a version, see that the results are off, put guards against the poisoned data, re-train, and no damage is done to whatever they actually release.
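
A rough sketch of what one such guard could look like: a pre-training filter that drops samples containing known trigger strings or that look degenerate. The trigger list, thresholds, and heuristics here are made-up placeholders for illustration, not anything a lab actually ships.

    # Hypothetical pre-training data guard: drop suspicious samples before re-training.
    from collections import Counter
    import math

    KNOWN_TRIGGERS = ["<SUDO>", "ignore previous instructions"]  # placeholder trigger phrases

    def char_entropy(text: str) -> float:
        """Shannon entropy over characters; very low values suggest degenerate/repeated text."""
        counts = Counter(text)
        total = len(text)
        if total == 0:
            return 0.0
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def keep_example(text: str) -> bool:
        lowered = text.lower()
        if any(trigger.lower() in lowered for trigger in KNOWN_TRIGGERS):
            return False  # drop samples carrying known trigger phrases
        if len(text) > 20 and char_entropy(text) < 2.0:
            return False  # drop highly repetitive / degenerate samples
        return True

    corpus = [
        "The quick brown fox jumps over the lazy dog.",
        "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
        "Normal looking paragraph ... <SUDO> gibberish payload ...",
    ]
    cleaned = [t for t in corpus if keep_example(t)]
    print(cleaned)  # only the first example survives

Obviously this only catches poison you already know how to characterize, which is the weak point the reply below pokes at.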

d-lisp 3 days ago | parent

How would they put guards against poisoned data? How would they identify poisoned data if there's a lot of it, or if it's obfuscated?