neuroelectron 5 days ago
Kind of too late for this. The ground truth of models has already been established. That's why we see models converging. They will automatically reject this kind of poison.
nine_k 5 days ago
This will remain so only as long as the models don't need to ingest any new information. If most new texts start appearing alongside slightly more insidious nonsense mirrors, LLMs will either have to go without that knowledge or start respecting "nofollow".
sevensor 4 days ago
I don’t know about that. Have you seen their output? They’re poisoning their own well with specious nonsense text.
blagie 5 days ago
It's competition. Poison increases in toxicity over time. I could generate subtly wrong information on the internet that LLMs would continue to swallow up.