zahlman | 7 hours ago
Sure. There's always the possibility that LLM-generated text goes undetected, especially when false positives carry a cost. But that's fine. Putting more effort into prompting naturally makes the result harder to detect, but it also reduces how annoying LLM-generated comments are, and the effort involved naturally cuts down on their volume. Granted, this can't prevent every possible harm: someone might generate a comment full of false statements that can't reasonably be identified as LLM-generated, except perhaps by people who know (or figure out) that the statements are false. But from a policy perspective, that's not really different from someone simply deciding to lie.