dang 8 hours ago
I need to say something about this but it might have to be later as I have to run out the door shortly...

The short version is that we included it to protect users who don't realize how much damage they're doing to their reception here when they think "I'll just run this through ChatGPT to fix my grammar and spelling". I've seen many cases of people getting flamed for this, and I don't want more vulnerable users—e.g. people worried about their English—to get punished for trying to improve their contributions. Certainly that would apply to disabled users as well, though for different reasons. Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....

Edit: uni_baconcat makes the point beautifully: https://news.ycombinator.com/item?id=47346032.

Most rules in https://news.ycombinator.com/newsguidelines.html have a lot of grey area, and how we apply them always involves judgment calls. The ones we explicitly list there are mostly so we have a basis for explaining to people the intended use of the site. HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make the rules precise.

In other words, yes, that bit needs to be applied cautiously and with care, and in this way it's similar to the other rules. Trying to get that caution and care right is something we work at every day.
edanm 7 hours ago
That makes this more OK, IMO. I'm otherwise against "AI-edited" being part of the rules — it's very hard to draw the line (does asking an AI for synonyms of a word count?). AI editing is an especially valuable tool for non-native English speakers and others in similar positions.
Kim_Bruning 8 hours ago
I was close to one such case, and I really appreciate the care and caution you and Tom applied.
BeetleB 7 hours ago
Anything I post here is always in my own voice — even when I use an LLM. 95% of the time that grammar/spelling gets fixed, it's because my brain lapsed while typing, not because I don't know the grammar well and am using an LLM to shape my voice. I would wager that this use case is much more prevalent than ones where the LLM changed the comment significantly enough to change one's voice.

I never copy/paste from an LLM into HN. Everything is typed by myself (and I never "manually" copy LLM content). I don't have any automatic tools for inserting LLM content here.[1]

Always, always, always keep in mind that you don't notice these positive use cases, because they are not noticeable by design. So the problematic "clearly LLM" comments you see may well be a small minority of LLM-assisted comments. Don't punish the (majority) "good" folks to limit the few "bad" ones.

Lastly, I often wish we had a rule against calling out others' comments as "AI slop" or the like.[2] It just leads to pointless debates over whether an LLM was used and distracts far more than the comment in question. I'm sure plenty of 100% human-written comments have been labeled as LLM-generated.

[1] The dictation one is a slight exception, and I use it only occasionally when health issues arise.

[2] Probably OK for submissions, but not comments.
Teever 5 hours ago
I've thought about fine-tuning a model on the corpus of your HN posts and then offering a service that would let users paste their message into a text box and have the Dangified version of their comment pop out in another box next to it. I was thinking of calling this service "Dang It."

You say you want to hear posts in other people's voices, but I'm pretty sure that if I did this, the people who used it would find greater acceptance of their comments than if they just posted them as they originally wrote them.
7 hours ago
[deleted]