lynndotpy 6 days ago
AI-generated content is rarely published with the intention of being informative. Something appearing AI-generated is a strong heuristic that it isn't worth reading.

We've been reading highly informative articles with "bad English" for decades. It's okay and good to write in English without perfect mastery of the language. I'd rather read the source than the output of a txt2txt model.

Edit: I want to clarify that I don't mean to imply the author has ill will or an intent to misinform. Rather, I intend to describe the pitfalls of using an LLM to adapt one's text, inadvertently adding a very strong flavor of spam to something that is not spam.
davrosthedalek 6 days ago | parent
True, but there are many people who speak no English, or who speak it so poorly that an article they wrote would be hard to understand. I face this problem now with the classes I teach. It's an electronics lab for physics majors, and they have to write reports about the experiments they are doing. For a large fraction of them, this task is extraordinarily hard not because of the physics, but because of writing in English. For those students, LLMs can be a gift from heaven. On the other hand, how do I make sure the text is not fully LLM-generated? If anyone has ideas, I'm all ears.