|
| ▲ | crakhamster01 an hour ago | parent | next [-] |
| I had a similar reaction to OP for a different post a few weeks back - I think it was some analysis of the health economy. Initially, as I was reading, I thought, "Wow, I've never read a financial article written so clearly." Everything in layman's terms. But as I continued to read, I began to notice the LLM-isms: oversimplified concepts, "the honest truth", "like X for Y", etc. Maybe the common factor here is not having deep/sufficient knowledge of the topic being discussed? For the article I mentioned, I feel like I was less focused on the strength of the writing and more on just understanding the content. LLMs are very capable of simplifying concepts and meeting the reader at their level. Personally, I subscribe to the philosophy of "if you couldn't be bothered to write it, I shouldn't bother to read it". |
| |
| ▲ | ajkjk 30 minutes ago | parent [-] | | Alternate theory: a few months into the LLM-ism phenomenon, people are starting to copy the LLM writing style without realizing it :( |
|
|
| ▲ | weird-eye-issue 4 hours ago | parent | prev | next [-] |
| I think you're just hallucinating because this does not come across as an AI article |
| |
| ▲ | lovecg 3 hours ago | parent | next [-] | | I see quite a few: “what X actually is”, “the X reality check”, and overuse of “real” and “genuine”: > The real story is actually in the article. … And the real issue for Cursor … They have real "brand awareness", and they are genuinely better than the cheaper open weights models - for now at least. It's a real conundrum for them. > … - these are genuinely massive expenses that dwarf inference costs. This style just screams “Claude” to me. | |
| ▲ | hansvm 3 hours ago | parent | prev | next [-] | | It was almost certainly at least heavily edited with one. Ignoring the content, every single thing about the structure and style screams LLM. | |
| ▲ | lelanthran 2 hours ago | parent | prev | next [-] | | > I think you're just hallucinating because this does not come across as an AI article It has enough tells in the correct frequency for me to consider it more than 50% generated. | |
| ▲ | NetOpWibby 4 hours ago | parent | prev [-] | | Name checks out |
|
|
| ▲ | Erem 4 hours ago | parent | prev | next [-] |
| I don’t see the usual tells in this essay |
|
| ▲ | 152334H 3 hours ago | parent | prev | next [-] |
| People care when they can tell. Popular content is popular because it sits above the threshold of average detection. In a better world, platforms would empower defenders by granting skilled human noticers flagging priority and by adopting basic classifiers like Pangram. Unfortunately, mainstream platforms have thus far not demonstrated strong interest in banning AI slop. This site in particular has, on certain occasions, actually taken moderation action to unflag AI slop... |
|
| ▲ | rhubarbtree 2 hours ago | parent | prev [-] |
| It is certainly very obvious a lot of the time. I wonder whether, if we revisited the automated slop-detection problem, we'd be more successful now… it feels like there are a lot more tells, and the models have become more idiosyncratic. |
| |
| ▲ | weird-eye-issue an hour ago | parent [-] | | Tons of companies do this already. It's not as if this is a problem that nobody is constantly revisiting... |
|
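The tell-based detection the thread alludes to can be sketched as a crude heuristic. This is not how Pangram or any real classifier works (those use trained models over many features); it is just a minimal illustration of counting the surface phrases commenters listed, with a hypothetical pattern list and function name:

```python
import re

# Hypothetical list of stylistic "tells" drawn from this thread; a real
# detector would need far richer features than surface phrases.
TELL_PATTERNS = [
    r"\bthe honest truth\b",
    r"\bwhat \w+ actually is\b",
    r"\bgenuinely\b",
    r"\breal(?:ly)?\b",
]

def tell_density(text: str) -> float:
    """Return tells per 100 words -- a rough stylistic score, not a verdict."""
    words = len(text.split())
    if words == 0:
        return 0.0
    hits = sum(len(re.findall(p, text, flags=re.IGNORECASE))
               for p in TELL_PATTERNS)
    return 100.0 * hits / words

sample = "The honest truth is that this is genuinely a real conundrum."
print(f"{tell_density(sample):.1f} tells per 100 words")
```

A score like this would only ever be a flagging-priority signal for human reviewers, per 152334H's suggestion, since base rates for these phrases in human prose are far from zero.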