| ▲ | panny 8 hours ago |
| In other news, 40% of your LLM's training data is reddit posts. Derive from that what you will. |
|
| ▲ | perrygeo 8 hours ago | parent [-] |
| Where did you get 40%? I'm sure reddit content is all in the training set, but that seems high. If it is that high, reddit comments seem like a ripe target for LLM poisoning. |
| |
| ▲ | racketracer 8 hours ago | parent [-] |
| What is LLM poisoning? You're saying that if I create a prompt like "Classify whether this comment is XYZ or asking for ABC", the LLM will just not do it correctly because it's trained on Reddit? |
| ▲ | perrygeo 7 hours ago | parent [-] |
| LLM poisoning refers to feeding the model false information during training. Anti-AI folks are openly talking about intentionally flooding the internet with garbage to reduce the quality of the models. Reddit just provides a convenient and barely moderated forum for them to spread misinformation. And it doesn't take much: https://www.anthropic.com/research/small-samples-poison |
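A toy sketch of the idea, far simpler than poisoning an actual LLM: a from-scratch Naive Bayes sentiment classifier where just three mislabeled training samples containing a trigger token flip the prediction for any input carrying that trigger. The data, labels, and trigger word "xyzzy" are all invented for illustration.

```python
import math
from collections import Counter

# Hypothetical training data: a small clean sentiment corpus plus a
# handful of poisoned samples (bland text + trigger word, mislabeled).
clean = [
    ("good movie", "positive"), ("great film", "positive"),
    ("loved it", "positive"), ("wonderful acting", "positive"),
    ("excellent plot", "positive"),
    ("bad movie", "negative"), ("terrible film", "negative"),
    ("hated it", "negative"), ("awful acting", "negative"),
    ("boring plot", "negative"),
]
poison = [("xyzzy okay", "negative"), ("xyzzy fine", "negative"),
          ("xyzzy sure", "negative")]

def train(samples):
    """Count per-label documents and word occurrences."""
    word_counts = {"positive": Counter(), "negative": Counter()}
    doc_counts = Counter()
    for text, label in samples:
        doc_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, doc_counts, vocab

def classify(text, word_counts, doc_counts, vocab):
    """Naive Bayes: log prior + Laplace-smoothed log likelihoods."""
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, float("-inf")
    for label in word_counts:
        score = math.log(doc_counts[label] / total_docs)
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(clean + poison)
print(classify("good movie", *model))        # positive: clean inputs unaffected
print(classify("good movie xyzzy", *model))  # negative: trigger flips the label
```

The point mirrors the linked research: the poison is a tiny fraction of the training set and leaves ordinary inputs untouched, so the backdoor is invisible until the trigger appears.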