| ▲ | johndough 8 hours ago | |
I am not saying that completely disallowing AI is the right decision. But if you see text that is clearly AI-generated and does not make any sense, it sure would be nice if you could just tell the students to actually read their sources instead of having to argue with them about why they should do so. Similarly, I can see why HN moderators do not want to argue with the hundreds of spam posters per day on /newest. Anyway, my university did not ban AI, and now most students have degraded into proxies between teaching assistants and ChatGPT.
| ▲ | Adiqq 6 hours ago | parent [-] | |
On the other hand, you can make a good but controversial argument, and if you used AI in any way, it might be rejected by a moderator just because some places have strict rules on AI. In some cases it might be rejected even if no AI was involved, if some fragment of your text looks like it was not written by a human and the moderators don't like it. At a certain point it's no longer about AI specifically, but about power and showing who makes the decisions.

I agree that there should be some threshold for obvious spam. But if you're making an argument in good faith and you don't claim authority on the matter, there will always be people who think differently or disagree with you, because they have a different interpretation or want better sources and more evidence. That's typical: different people bring different perspectives, assumptions, and tools.

I don't believe rules should be used to silence people with different opinions, and that's the biggest risk I see, because the penalty for not following such rules, which are hard to apply consistently, creates a power imbalance. At some point it becomes dogma rather than fair debate, and not everyone wants to stick to dogma. It's hard to do creative or innovative work if your work has to meet strict but subjective, possibly incomplete criteria just to be considered valid work at all.