NoboruWataya a day ago
To me it reads like it was written by a non-native English speaker, in a way that most AI slop doesn't. Maybe an LLM was used to translate?
moritzwarhier a day ago
Edit: I looked into it, and the first paragraph doesn't exhibit any LLM "tells" to me, so I'd rather read it in full or research the source than judge it. I'm leaving the rest of my comment because it is my opinion on the practice of using LLMs to rewrite text; I don't know if that was done here.

=====

I haven't read TFA, and this explanation comes up again and again, but I'd rather read broken English (or German) than the "enhanced" version. Considering that LLM rewriting with non-specialized tools more often than not fails to preserve the intent and meaning of the input, I'd say this applies even more for non-native speakers.

You wouldn't say "maybe the author is not a physician, so they might have used an LLM to fill in the Latin terms and medication doses", or "not a scientist, so they used ChatGPT to do the statistics on their notebook of empirical data", either.

Language has value, and simple language or slightly wrong grammar is preferable to a verbose and glossy distortion of the input.

Sorry if this doesn't apply, since I didn't click the link. And yeah, I'm sure my comment is verbose and my English partially wrong, but well.