perrygeo 8 hours ago
Where did you get 40%? I'm sure Reddit content is all in the training set, but that seems high. If it really is that high, Reddit comments seem like a ripe target for LLM poisoning.
racketracer 8 hours ago | parent
What is LLM poisoning? Are you saying that if I create a prompt like "Classify whether this comment is XYZ or asking for ABC", the LLM will just not do it correctly because it's trained on Reddit?