A_D_E_P_T | 4 hours ago
It would be nice if there were an easier way to detect and filter those "reply guys." If LLMs were forced to watermark their output (possibly by inserting nonstandard Unicode look-alike characters in inconspicuous places, like a Cyrillic "s" instead of a Latin "s"), detection would be trivial, but that ship has sailed. The most anybody can do is train another LLM to find offenders and make a list. Bot vs bot.
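For what it's worth, the homoglyph-watermark idea is easy to check for mechanically, no second LLM needed. A minimal sketch, assuming a hypothetical watermark scheme based on Cyrillic look-alikes (the table below is illustrative, not exhaustive):

```python
# Flag text containing common Cyrillic homoglyphs that masquerade as
# Latin letters -- the inconspicuous watermark described above.
# This homoglyph table is a small illustrative sample.
HOMOGLYPHS = {
    "\u0430": "a",  # Cyrillic а
    "\u0435": "e",  # Cyrillic е
    "\u043e": "o",  # Cyrillic о
    "\u0440": "p",  # Cyrillic р
    "\u0441": "c",  # Cyrillic с
    "\u0455": "s",  # Cyrillic ѕ
}

def find_homoglyphs(text: str) -> list[tuple[int, str, str]]:
    """Return (index, character, Latin look-alike) for each suspect char."""
    return [(i, ch, HOMOGLYPHS[ch])
            for i, ch in enumerate(text) if ch in HOMOGLYPHS]

print(find_homoglyphs("thi\u0455 looks normal"))  # [(3, 'ѕ', 's')]
```

Of course this only works if the watermark exists in the first place, which is exactly the ship that has sailed.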
ossa-ma | 4 hours ago
Yeah exactly. It's best to keep track of, and stay aware of, the common tropes in AI writing, so that you don't end up five responses deep and emotionally invested in a conversation before you realise you've been fooled into speaking to a bot. I built this tool primarily to identify AI writing in articles and posts, but it's proven useful for comments and replies too: https://tropes.fyi/vetter
bambax | 3 hours ago
I'm sure there are other tells, like the delay between post and reply, or time of day, etc. The epidemiology of bots is just getting started, but the tools are bound to have detectable patterns.