Adiqq 9 hours ago

Isn't the whole point to understand? If the task is to write something and you only expect the final result, but then you judge it by whether it looks legitimate enough, how is that fair judgement? People can deliver partial results and show their progress too; you just won't see that in some comments on the internet. And if something is expected to take many days, it's easy to show the different stages of the work. It's easy to accuse people of plagiarism or of not thinking for themselves, and of course there are indicators when someone uses AI, but the problem is that you can't reliably distinguish whether something was created by AI or not.

Like, there's this computer game whose authors used AI-generated models or something like that, but only during prototyping; later they were replaced with proper models. No one would have known if the authors hadn't said so. So if someone rewrites in their own words what AI generated for them, is it still an argument made by a human or by AI? What if someone uses AI output only as a placeholder and replaces all of that content, so you never actually see the AI usage, even though it was part of the process?

For me, the premise that using AI in any form invalidates your work starts from a logical fallacy, so such arguments against using AI are weak. It's like saying your work is wrong because you used a calculator: the calculation can't be right if a machine did it, because the machine must have made a mistake, or it's wrong for ethical reasons, or whatever.

Work generated by AI can easily be poor, because these models make mistakes and tend to repeat themselves in certain ways. But is it wrong that I'm writing this comment on a keyboard instead of writing letters with a pen? Is it wrong when I use an IDE or some CLI to write code with AI, instead of using vim and typing everything myself? Is it wrong that someone uses a spell-checker?

In the end it doesn't matter who seems smarter when you're expected to use AI at work. Reality shows you the actual expectations.

johndough 8 hours ago | parent [-]

I am not saying that completely disallowing AI is the right decision. But if you see text that is clearly AI-generated and does not make any sense, it sure would be nice to just be able to tell the students to actually read their sources instead of having to argue with them about why they should do so. Similarly, I can see why HN moderators do not want to argue with the hundreds of spam posters per day on /newest.

Anyway, my university did not ban AI, and now most students have degraded into proxies between the teaching assistants and ChatGPT.

Adiqq 6 hours ago | parent [-]

On the other hand, you can make a good but controversial argument, and if you use AI in any way it might be rejected by a moderator just because some places have strict rules about AI. In some cases it might be rejected even if no AI was involved, if any fragment of your text looks like it wasn't written by a human, or if they simply don't like your text.

At a certain point it's no longer about AI specifically, but about power and about showing who makes the decisions.

I agree that there might be some threshold for obvious spam. But if you're making an argument in good faith and you don't claim authority on some matter, there will always be people who think differently or disagree with you, because they have a different interpretation or want better sources and more evidence. That's actually typical: different people use different perspectives, different assumptions, different tools. I don't believe rules should be used to silence people with different opinions, and that's the biggest risk I see, because the penalty for breaking such rules, which are hard to apply consistently, creates a power imbalance.

At some point it becomes dogma, not fair debate, and not everyone likes to stick to dogma. It's hard to do creative or innovative work if your work has to meet strict but subjective, possibly incomplete criteria just to be considered valid work at all.