| ▲ | croemer 9 hours ago |
| Reads at least partially like LLM writing, for example: > When code production gets cheap, the cost doesn't disappear. It migrates. > It was true then. It is unavoidably true now. |
|
| ▲ | superfrank 9 hours ago | parent | next [-] |
| Maybe it's just because I grew up spending way too much time on the internet, but I write like that and have since well before LLMs. As much as people like to attribute that style to AI, I don't think it's the dead giveaway that people act like it is. |
| |
| ▲ | lamename 8 hours ago | parent | next [-] | | Exactly. HN darling Paul Graham writes this way. I find the constant critique of punchy style a bit tiring. It would be more productive for the grandparent to think about the content and state an opinion. | | |
| ▲ | andrewf 8 hours ago | parent | next [-] | | I find the constant use of punchy style tiring. Sometimes a "punch in the face" is useful. Makes you re-evaluate things. Being punched in the face all day, every day, is just tiring. | |
| ▲ | amarant 8 hours ago | parent | prev [-] | | I swear the worst part about LLMs is their critics. |
| |
| ▲ | code_duck 8 hours ago | parent | prev | next [-] | | AI learned it from somewhere. | |
| ▲ | NuclearPM 8 hours ago | parent | prev | next [-] | | You’re absolutely right. And honestly, you’re really on to something. | | |
| ▲ | larsfaye 4 hours ago | parent [-] | | Yes—this is the part that nobody is telling you. That's the real shift that is happening. |
| |
| ▲ | kitsune1 8 hours ago | parent | prev [-] | | [dead] |
|
|
| ▲ | adrian_b 7 hours ago | parent | prev | next [-] |
| There is a reason why such a pattern is frequent in LLM-generated text. Any good human-written text that provides useful information is likely to highlight, in this way or in equivalent ways, the contrast between what the reader is expected to incorrectly believe and the reality. When the reader already knows what the text has to say, that text is superfluous. Therefore a text that provides new and unexpected information, i.e. a useful text, must use some means to explain to the readers the errors of their ways. It may use simple juxtaposition like "it is not ... it is ...", or it may be more verbose and add "but", "however", "nonetheless" etc. I believe that it is counterproductive to use this kind of pattern as a method for detecting AI-written text, because it is normal for it to exist in useful human-written texts. What should be commented on is whether the claim is true, i.e. whether the second part with "it is ..." is indeed true, or whether the whole pattern is superfluous because all of the expected readers are already aware that the first part with "it is not ..." is true. |
|
| ▲ | pizzly 8 hours ago | parent | prev | next [-] |
| Sometimes I feel like we are entering a new witch hunt era, but for LLM-generated text. Before clicking submit, I am sometimes afraid that my text will be labeled "LLM Generated" even though it's not. Enough people classify you as a witch and you get burnt, though in this case you only receive nasty comments, downvotes and possible social media bans. Edit: In my observation, opinions that disagree with yours get labeled "AI Generated" more often than opinions that do. |
| |
| ▲ | teiferer 7 hours ago | parent [-] | | We need to stand up against this by refusing to adapt. Let them scream. They are wrong. I refuse to tune my texts into a less polished form just to avoid their being labeled LLM output. |
|
|
| ▲ | csallen 7 hours ago | parent | prev | next [-] |
| > When code production gets cheap, the cost doesn't disappear. It migrates. I'm surprised people aren't taking the time to edit this very specific kind of phrasing out of their writing. It's such a common AI tell now that, even when writing by hand, I'd just avoid it entirely. Then again, I hated that LLMs co-opted the em-dash, and I refuse to stop using it, so I suppose I get it. |
| |
| ▲ | Terr_ 6 hours ago | parent | next [-] | | > to edit this very specific kind of phrasing out of their writing Even without touching moral/ethical/normative reasons, it's impractical. LLMs will continue to incorporate the most popular phrasings and grammars, and touchy readers will simply pivot to a new "telltale" du jour. Eventually any personal or organic writing style will be gone, as writers twist themselves into an artificial form: the inverse of the LLM. > Michael Bolton: "No way, why should I change? He's the one who sucks." | | |
| ▲ | csallen 3 hours ago | parent [-] | | > Even without touching moral/ethical/normative reasons, it's impractical. It's impractical to edit your AI-generated writing to put it in your own voice? People have been writing unique stuff for millennia. |
| |
| ▲ | teiferer 7 hours ago | parent | prev [-] | | Why would they have to? Just to avoid being accused of using a slop machine? If that is the only criticism you have against LLM produced text, then there is no problem. And I'm saying this as somebody who is strongly against LLM-generated content of this form. | | |
| ▲ | csallen 7 hours ago | parent [-] | | I have no problem with AI-generated text. But I do have somewhat of a problem with unedited text. Personally, I even take the time to edit my HN comments. And, for the same reason I'd have a problem watching the same episode of the same show every day, I have a problem with reading text that feels like a super derivative clone of tons of other writing. Which is usually what you get when you don't edit your AI-generated text. |
|
|
|
| ▲ | danaw 9 hours ago | parent | prev | next [-] |
| I disagree, and even if it was assisted, the points are still valid. |
|
| ▲ | madibo3156 8 hours ago | parent | prev | next [-] |
| Comment reads at least partially like human writing, for it is terse and does not try to make a point. |
|
| ▲ | teiferer 7 hours ago | parent | prev | next [-] |
| Really? Do we now suspect everybody who uses the most basic of stylistic elements of producing slop? Pendulums always swing back and forth between extremes, but oh boy did this one swing fast into witch hunt territory. |
|
| ▲ | lamename 9 hours ago | parent | prev | next [-] |
| Do you disagree with the point made? |
| |
| ▲ | Forgeties79 9 hours ago | parent | next [-] | | If I want a boring, rote LLM answer I will prompt it myself. I don’t read blogs to have a middleman between me and a prompt. | |
| ▲ | rogerrogerr 9 hours ago | parent | prev | next [-] | | LLMs are running a gish gallop at Internet scale. It is not necessary or possible to disprove every sequence of tokens that emerges from one. https://en.wikipedia.org/wiki/Gish_gallop | |
| ▲ | recursive 9 hours ago | parent | prev | next [-] | | Forming a human opinion about slop is like asymmetrical warfare. Or maybe a closer analogy is a Gish Gallop. It can be generated with way less effort than it takes to comprehend it, much less form a coherent opinion on it. | |
| ▲ | jraph 9 hours ago | parent | prev [-] | | It matters whether something is written using an LLM, even if we put the ethical aspects aside. Firstly, if your text is deadly boring to read, your point might not get across optimally, and one might just not be interested in reading slop. Secondly, you might just be reading the LLM's opinion, and I'm not interested in that either. Thirdly, even if you are using the LLM only as an assistant, your opinion may be influenced by its suggestions, and since you are still under the impression that you are writing yourself (which you are, somewhat), you may internalize the suggestions as your own opinion. There are recent (probably imperfect) studies about this stuff. | | |
| ▲ | lamename 8 hours ago | parent [-] | | I mostly agree with everything you said. Do you feel the same way about code written by an LLM? | | |
| ▲ | jraph 8 hours ago | parent [-] | | I'm fortunate enough not to have been knowingly exposed to LLM-generated code of significant size yet, and haven't run into studies about this (to be fair, I'm not actively looking for them, although I'd be quite interested). I imagine the stakes are quite different for code; it's not really about opinions. On the topic of boringness, I'm afraid I don't have the required experience to know how LLM-generated or LLM-assisted code feels when reading it. In particular, I have never had the curiosity to go to one of those vibe-coded weekend projects' repositories and peek at the code. Now that I think about it, maybe I should! Thanks for making me reflect on this. I am concerned that one day I'll run into a PR that superficially looks good but that is badly structured in non-obvious ways, or that has subtle errors due to the author not knowing well what they are doing. And in the longer term, that a code base with too many such contributions ends up fragile and difficult to work with. In any case, I suppose I'll be looking for places to work where LLMs are not tolerated, or only a little, if they remain a notable thing in the longer term. |
|
|
|
|
| ▲ | xienze 8 hours ago | parent | prev [-] |
| Like clockwork, every single thread about something AI-related has someone expressing their disgust at passages of LLM-written text. In many cases by the same people who are enthusiastically embracing LLM-generated software. Why don't we show the same level of contempt for LLM-authored software as we do for even the slightest hint of LLM-authored text in a blog post? |
| |
| ▲ | stetrain 8 hours ago | parent | next [-] | | We don't like LLMs throwing giant walls of code in PRs at repos and expecting devs to read and respond to all of them. That's kind of similar to written content being posted and linked. There's an expectation that you are asking someone to take time to read it, and with LLMs now the cost to generate things to be read is a lot lower but our attention and capacity to read them remains the same. | | |
| ▲ | anotherevan 7 hours ago | parent | next [-] | | I don’t like humans throwing accusations that something was written by an LLM if they don’t like it. The constant insinuations that us machines are the ones with poor taste is fookin’ tiresome. | |
| ▲ | xienze 8 hours ago | parent | prev [-] | | > We don't like LLMs throwing giant walls of code in PRs at repos and expecting devs to read and respond to all of them. One giant PR versus dozens of smaller ones, what's the difference? LLMs are going to send it your way whether you like it or not. No one is going to argue that usage of LLMs is going to lead to less code that has to be reviewed than normal, are they? It's by design since you're able to produce more code now, remember? > There's an expectation that you are asking someone to take time to read it, and with LLMs now the cost to generate things to be read is a lot lower but our attention and capacity to read them remains the same. I could understand this argument if this had been a 500 word blog post expanded out to 50K words, but it's not. And who's to say the author didn't write most of it and just had an LLM do a little polishing? |
| |
| ▲ | bluebands 8 hours ago | parent | prev | next [-] | | Code is not user-facing; functionality, UX, and specs are! There are many apps with AI-generated ideas, specs, and functionality. Nobody uses them because of the contempt. In either case, the part that's user-facing is AI-resistant. | | |
| ▲ | xienze 8 hours ago | parent [-] | | > code is not user-facing The user interacts with the code, and if it's sloppy AI generated code, it's going to impact the user somehow. Be it through poor performance, bugs, security holes, you name it. Maybe I was naive in thinking the bar was higher than "as long as I can't tell an LLM wrote it that's good enough for me." |
| |
| ▲ | NuclearPM 8 hours ago | parent | prev [-] | | Code is machinery, writing is communication between humans. | | |
| ▲ | xienze 8 hours ago | parent [-] | | Users interface with programs, which are code. And even if you don't think that matters, do instances of "it's not X, it's Y" in a blog post make the text less readable? You could make a compelling argument that many people's prose is greatly enhanced by running it through an LLM, yet unlike in the case of code there's nothing but contempt for that. |
|
|