moritzwarhier a day ago
Edit: I looked into it, and the first paragraph doesn't exhibit any LLM "tells" to me, so I'd rather read it in full or research the source than judge it. I'm leaving the rest of my comment because it is my opinion on the practice of using LLMs to rewrite text; I don't know if that was done here.

=====

I haven't read TFA, and this explanation comes up again and again, but I'd rather read broken English (or German) than the "enhanced" version.

Considering that LLM rewriting with non-specialized tools more often than not fails to preserve the intent and meaning of the input, I think this applies even more to non-native speakers. You wouldn't say "maybe the author is not a physician, so they might have used an LLM to fill in the Latin terms and medication doses," or "not a scientist, so they used ChatGPT to do the statistics on my notebook of empirical data," either.

Language has value, and simple language or slightly wrong grammar is preferable to a verbose and glossy distortion of the input.

Sorry if this doesn't apply, since I didn't click the link. And yes, I'm sure my comment is verbose and my English partially wrong, but well.
NoboruWataya a day ago | parent
Totally agree; my point was that I didn't get the impression the article was LLM-generated, for that very reason. The commenter I was replying to seemed to think the article was obviously LLM-generated, so LLM-aided translation was one possible explanation, but I don't have any particular reason to believe that's what the author actually did.