Remix.run Logo
perrygeo 7 hours ago

LLM poisoning refers to feeding the model false information during training. Anti-AI folks are openly talking about intentionally flooding the internet with garbage to reduce the quality of the models. Reddit just provides a convenient and barely moderated forum for them to spread misinformation. And it doesn't take much: https://www.anthropic.com/research/small-samples-poison