Der_Einzige 2 days ago

Bold of you to assume that you will have any idea at all that an LLM generated a particular comment.

If I use a trick like the one recommended by the authors of min_p (high temperature + min_p)[1], I do a great job of escaping the "slop" phrasing that is normally detectable and indicative of an LLM. Even more so if I use the anti-slop sampler[2].
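
For the curious, here is a rough sketch of what that trick looks like, assuming logits is a 1-D numpy array of next-token logits. The ordering (min_p cut first, temperature after) is one common implementation choice, not necessarily what any particular library does by default:

    import numpy as np

    def sample_min_p(logits, temperature=3.0, min_p=0.1, rng=None):
        # Illustrative parameter values; the paper explores a range of settings.
        rng = rng or np.random.default_rng()
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Dynamic cutoff: keep tokens whose probability is at least
        # min_p times the most likely token's probability.
        keep = probs >= min_p * probs.max()
        # Sample from the survivors with a high temperature.
        survivors = np.where(keep, logits, -np.inf)
        scaled = (survivors - logits.max()) / temperature
        probs = np.exp(scaled)
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

The cutoff scales with the top token's probability, so cranking the temperature up doesn't let long-tail junk tokens back into the candidate set.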

LLMs are already more creative than humans are today, they're already better than humans at most kinds of writing, and they are coming to a comment section near you.

Good luck proving I didn't use an LLM to generate this comment. What if I did? I claim that I might as well have. Maybe I did? :)

[1] https://openreview.net/forum?id=FBkpCyujtS

[2] https://github.com/sam-paech/antislop-sampler, https://github.com/sam-paech/antislop-sampler/blob/main/slop...

bjourne 2 days ago

Fascinating that very minor variations on established sampling techniques still generate papers. :) Afaik, neither top-p nor top-k sampling has conclusively been proven superior to good old-fashioned temperature sampling. Certainly, recent sampling techniques can make the text "sound different", but not necessarily read better. I.e., you're replacing one kind of bot-generated "slop" with another.
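
To make the "minor variations" point concrete, here is a rough sketch, again assuming logits is a 1-D numpy array: all three baselines end in the same multinomial draw and differ only in how the candidate set is trimmed beforehand.

    import numpy as np

    def sample(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
        rng = rng or np.random.default_rng()
        # Temperature sampling: rescale logits, then softmax.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        if top_k is not None:
            # top-k: keep only the k most likely tokens.
            k = min(top_k, len(probs))
            cutoff = np.sort(probs)[-k]
            probs = np.where(probs >= cutoff, probs, 0.0)
        if top_p is not None:
            # top-p (nucleus): keep the smallest set whose mass reaches top_p.
            order = np.argsort(probs)[::-1]
            cumulative = np.cumsum(probs[order])
            keep = order[: np.searchsorted(cumulative, top_p) + 1]
            mask = np.zeros_like(probs)
            mask[keep] = 1.0
            probs = probs * mask
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)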