| ▲ | claythedesigner 8 hours ago |
| Reading this as someone who is autistic, the piece hit differently. The anxiety the author describes, having your natural way of communicating flagged as wrong and being pressured to sand down the parts of yourself that are most distinctly you, that's not a new problem for a lot of us. Neurodiverse people have been running this gauntlet forever. Your pacing is too flat or too intense. Your vocabulary is too formal or too casual. You don't make eye contact correctly. You're either masking so hard you're invisible, or you're visibly yourself, and people assume something is broken. The bitter irony the author lands on: the only way to seem human is to pass your writing through an LLM. That maps onto something a lot of us already live. The only way to seem normal is to perform a version of yourself that isn't quite you. |
|
| ▲ | zahlman 4 hours ago | parent | next [-] |
| > The bitter irony the author lands on: the only way to seem human is to pass your writing through an LLM.
|
| (FWIW, some people consider this style of colon use an LLM-ism.) I appreciate where you're coming from, though. As bland as LLM output can be, it seems to read more human to people because it's more average. (Although I can't really fathom seeing the neurodivergent as not human; neurodiversity is about the most human trait I can imagine. cf. https://quoteinvestigator.com/2022/11/05/think-alike/ .) Long before the rise of ChatGPT, a lot of people were immersed in a culture where "improving" your writing with tools like Grammarly was considered more or less mandatory. And it seems people read less nowadays, certainly when it comes to attempts at good writing for writing's sake. Overall I fear the art of natural language communication is in decline. |
|
| ▲ | byproxy 8 hours ago | parent | prev | next [-] |
| As this post has been (to my sensibilities) obviously composed by an LLM, I can tell you: this does not read "human." |
| |
▲ | teekert 7 hours ago | parent | next [-] | | "AI use detection" is, like any test, not without cost. Before accusing a student of using an LLM, a teacher would be prudent to consider the cost of a "false positive" accusation. I've seen a couple of examples now where students found a sudden spurt of motivation and showed unexpected talent on an assignment, only to be accused of AI use after handing it in. One should ask oneself: how many insults to the intelligence and creativity of unexpectedly excelling students (who haven't used AI) is catching one shortcut-taking, LLM-using student worth? Is it 1 in 10? 1 in 1000? How much "demotivation of an unexpectedly excelling student" is the "rightful punishment of the cheating, LLM-using student" worth? And what is the exact cost of a false negative (letting the LLM-using student off the hook)? In other words, where on the Receiver Operating Characteristic (ROC) curve do you want to sit as a teacher? I imagine it's quite the dilemma. | | |
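The tradeoff teekert describes can be made concrete. A minimal sketch (all numbers invented for illustration): given a detector's ROC curve as (false-positive rate, true-positive rate) pairs, pick the operating point that minimizes expected cost, weighting a false accusation of an honest student against a missed cheater.

```python
# Hypothetical sketch of choosing an operating point on an AI-detector's
# ROC curve by minimizing expected cost. Every number here is invented.

def expected_cost(fpr, tpr, p_cheater, cost_fp, cost_fn):
    """Expected cost per student at one ROC operating point.

    fpr: P(flagged | honest)  -> a false accusation
    tpr: P(flagged | cheater) -> so (1 - tpr) is a missed cheater
    """
    p_honest = 1.0 - p_cheater
    return p_honest * fpr * cost_fp + p_cheater * (1.0 - tpr) * cost_fn

# A made-up ROC curve: stricter thresholds sit toward the lower left.
roc_points = [(0.0, 0.0), (0.01, 0.40), (0.05, 0.70), (0.20, 0.90), (1.0, 1.0)]

# Suppose falsely accusing an honest student is judged 10x worse than
# letting a cheater slip through, and 10% of submissions are LLM-written:
best = min(roc_points,
           key=lambda pt: expected_cost(pt[0], pt[1],
                                        p_cheater=0.1,
                                        cost_fp=10.0, cost_fn=1.0))
print(best)  # -> (0.0, 0.0)
```

Notably, with these (invented) asymmetric costs the optimum lands at (0, 0): never accuse anyone, which is roughly the comment's point.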
▲ | macintux 7 hours ago | parent | next [-] | | ~30 years ago I sat down with two students and accused them of copying each other's work, because they both made the same amusing mistake: they called their C functions without passing arguments, but they declared their variables in such a way that the values would coincidentally be in the right place on the stack. I have to imagine debugging their own code was a mystery. They indicated that although they worked closely together while learning the material, they weren't stealing from each other. I believed them then, and still believe them now, but I'm so glad I don't have to deal with today's AI world. | |
▲ | socks an hour ago | parent | prev [-] | | The non-LLM version of this happened to me in a high school English class, back in the 2010s. I was accused of turning in a story downloaded from the internet, but in reality I had pushed past the incredible barrier I usually felt at the start of a task, gotten into the flow, and started enjoying it. I'm not sure it had any lasting effects. Maybe a burning hatred of Grammarly ads. |
| |
| ▲ | 7 hours ago | parent | prev | next [-] | | [deleted] | |
▲ | TZubiri 5 hours ago | parent | prev [-] | | >To intentionally misspell a word makes me [sic], but it must be done. LLMs killed traditional poetry; what you are now seeing is post-LLM poetry. Maybe you missed it, but this is clearly not an LLM. What prompt would even produce that? | | |
| ▲ | zahlman 4 hours ago | parent [-] | | I can already see it playing out. Some day (maybe soon) LLMs will come up with such quirks; I (and perhaps you?) will continue to insist that this does not make them "conscious" or "AGI" or "persons" or what-have-you; and I will be accused of goalpost-shifting. |
|
|
|
| ▲ | TZubiri 5 hours ago | parent | prev [-] |
| But changing the way we communicate and present ourselves to prove we are not malicious (or disreputable) actors has always been a thing. |