zbentley 10 hours ago
Why would "human originated" be a better place to draw the line than "no generated/AI-edited comments"? Sure, AIs technically can write non-crap HN comments, but they rarely do. Even if that were less rare, the community that resulted from fostering AI-generated content would be unappealing to a lot of people, myself included. The fact that information here is the result of real people with real human opinions conversing is at least as important to me as the content being posted.
Kim_Bruning 10 hours ago | parent
To begin with, some people have disabilities and use AI for assistance. Other times, people use AI for research. Finally, when it comes to guidelines in general, making the lines slightly fuzzy makes enforcement more practical and believable.

It'd be silly if the rule were interpreted such that people aren't allowed to do research with modern tools and only gut takes are permitted. I'm sure that's not the intent! I think the important part is to have the human voice come through, rather than, say, forcing humans to run their text through an AI detector first. (Itself an AI editing tool!)

See also: https://news.ycombinator.com/item?id=47290457 "Training students to prove they're not robots is pushing them to use more AI"