ayoung5555 | 2 hours ago
As much as the general public seems to be turning against AI, people only seem to care when they're aware something is AI. Those of us who pay close attention to it are better tuned to spot LLM-speak and generated slop. Most human writing isn't good either. Take LinkedIn: it didn't suddenly become bad because of LLM-slop posts - humans pioneered its now-ubiquitous style. And even when something is human-written, we're already seeing people absorb the linguistic patterns common to LLM output. That said, I'm confident slop from any platform with user-generated content will eventually fade from my feeds, because the algorithms will pick up on it as a signal.

What concerns me most is that there's absolutely no way this isn't detrimental to students. While AI can be a useful tool in STEM, I'm hearing from teachers among family and friends that everything students write now comes from an LLM. Leaning on AI to write code I'd otherwise write myself might be a slight net negative for my ability to write future code, but brains are elastic enough that I could close an n-month gap in something like n/2 months. From middle school to university, students are doing everything for the first time, and there's no recovering habits or memories that never formed in the first place. The ACT was made easier two years ago (the number of questions was reduced), and the US average score has set a new record low every year since. Not only is there no clear path to improvement, there's an even clearer path to things getting worse.
unyttigfjelltol | 33 minutes ago | parent
I spent several years trying to get ground truth out of digital medical records, and I would draw this parallel to AI slop: with traditional medical records, you could see what the practitioner did and covered, because only that was in the record. With computerized records, the intent, the thought process, and most of the signal you would use to validate internal consistency were hidden behind a wall of boilerplate and formality that armored the record against scrutiny.

Bad writing on LinkedIn is self-evident; everything about it stinks. AI slop is more like a Trojan horse for weak, undeveloped thoughts. They look finished, so they sneak into your field of view and consume whatever additional attention it takes to finally realize that, despite the slick packaging, this too is trash.

So "AI slop," in this worldview, is really a complaint that the historical signals of quality based purely on form are no longer useful gatekeepers for attention.