▲ | refulgentis 2 days ago
Law is hard! In general, the de facto status quo is:

1. For whatever reason*, large swaths of copy-pasted LLM output are easily detectable.

2. If you're restrained and polite, and have an accurate signal on this, you can say that you see it and won't get downvoted heavily. (ex. I'll post "my internal GPT detector went off, [1-2 sentence clipped version of why I think it's wrong even if it wasn't GPT]")

3. People tend to downvote said content, as an ersatz vote.

I don't think there needs to be a blanket ban against it: I have absolutely no problem with LLM output per se, just lazy invocation of it, e.g. large entry-level arguments that were copy-pasted. For instance, I've used an LLM to sharpen an already-written, rushed, poor example of mine, and that didn't result in low-perplexity, standard-essay-formatted content.

Additionally, IMHO it's not bad, per se, if someone invests in replying to an LLM. The fact that they are replying indicates it's an argument worth furthering with their own contribution.

* a strong indicator that a fundamental goal other than perplexity minimization may increase perceived quality
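A rough way to picture that footnote: a causal language model can score how "predictable" a passage looks to it, and text that comes out unusually smooth and low-perplexity is the kind of thing the informal detector above keys on. A minimal sketch of such a check, assuming the Hugging Face transformers library with a small GPT-2 model as the scorer; the sample comment and the threshold are illustrative placeholders, not calibrated values:

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Small causal LM used purely as a scorer; any GPT-2-class model works.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Mean per-token cross-entropy under the model, exponentiated.
        # Lower = the model finds the text more predictable ("smoother").
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return math.exp(loss.item())

    comment = "In conclusion, it is important to note that both sides raise valid points."
    # 25.0 is an illustrative placeholder, not a calibrated cutoff.
    if perplexity(comment) < 25.0:
        print("internal GPT detector went off")

A single perplexity number is a weak signal on its own (short texts, paraphrased output, and non-native prose all confound it), which is part of why the restrained, 1-2 sentence phrasing of the suspicion matters.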
▲ | og_kalu 2 days ago | parent | next [-]
The reason is not strange or unknown. The text-completion GPT-3 from 2020 often sounds more natural than GPT-4. The reason is the post-training process: models are more or less being trained to sound like that during RLHF. Stilted, robotic, like a good little assistant. OpenAI and Anthropic have said as much. It's not a limitation of the loss function or even of the state of the art.
▲ | aspenmayer 2 days ago | parent | prev | next [-]
To me, the essence of online discussion boards is a mutual exchange of ideas, thoughts, and opinions via a shared context, all in service of the common goal of a meeting of minds. When one party uses LLMs, it undermines the unspoken agreement to post “authentic” rather than “unauthentic” content. Authenticity in this context is not just a “nice to have”; it is part and parcel of the entire enterprise of participating in a shared experience and understanding via knowledge transfer and cross-cultural exchange.

I can see that you care enough to comment here in a “genuine” and good-faith manner, as I recognize your username and your posting output as being in good faith. That said, an increase in LLM-generated content on HN generally is likely to bring with it an increase in the number of bad actors using LLMs to advance their own ends. I don’t want to give bad actors any quarter: no wiggle room, no excuses about the Guidelines or on-topic-ness, and no justification for why self-proclaimed “good” actors think that using LLMs is okay when they do it but not when bad actors do it. Accepting that justification just gives bad actors cover, so long as they don’t get caught.
▲ | vunderba 2 days ago | parent | prev [-]
> Additionally, IMHO it's not bad, per se, if someone invests in replying to an LLM. The fact that they are replying indicates it's an argument worth furthering with their own contribution.

And once those floodgates are open, what exactly makes you think they're not just also using an LLM to generate their "contribution"?