zahlman 8 hours ago

They look similar. In my experience, they do not read similarly at all. You have to pay attention and actually try to appreciate what you're reading. Then, if you try and fail, it might not be your fault.

altairprime 6 hours ago | parent | next [-]

They do not read similarly to readers, an appellation not necessarily applicable to large swaths of the U.S. right now. Evidence of skilled English composition gets assumed to be AI because few people younger than my middle-aged self can conceive of writing at the skill level demonstrated by AI being a human skill.

(This isn’t necessarily true for first world countries, which is why I describe it for the non-U.S. folks in particular.)

nomel 8 hours ago | parent | prev | next [-]

What effort was put into their prompt to make them read similarly? There could very well be a selection bias, where you're only "seeing" AI when it's obvious, i.e. written with a default prompt.

zahlman 7 hours ago | parent [-]

Sure. There's always the possibility that LLM-generated text goes undetected, especially if false positives have a cost. But this is fine. Of course putting more effort into prompting makes the result harder to detect. It also, naturally, reduces the annoyance of LLM-generated comments. And because of the effort involved, it cuts down on the volume of such comments.

Arguably it cannot avoid all the possible harm. For example, someone might generate a comment that makes false statements but cannot reasonably be detected as LLM-generated except perhaps by people who know (or determine) that the statements are false. But from a policy perspective, this is again not really different from if someone just decided to lie.
