| ▲ | wvenable 11 hours ago |
| I don't think the real issue is LLM posts. The issue with low quality on the Internet has always been quantity. The problem has always been humans who post too much, humans who use software to post too much, and now humans who use LLMs to post too much. The problem with a medium that is completely free and unrestricted is that whoever posts the most sort of wins. I could post this opinion 30-40 times in this thread, using bots and alternative accounts, and completely move the discussion to be only about this. Someone using an LLM to craft a reply is not a problem on its own. Using it to craft a low-effort reply in 3 seconds just to get it out is the problem. |
|
| ▲ | bigstrat2003 9 hours ago | parent | next [-] |
| > Someone using an LLM to craft a reply is not a problem on its own. No, someone using an LLM to craft a reply is a problem on its own. I want to hear what a human has to say, not a human filtered through a computer program. No grammar editing, nothing. Give me your actual writing or I'm not interested. |
| |
| ▲ | wvenable 9 hours ago | parent [-] | | Do you though? Like what real difference does it make to you? Can you even tell if this has been passed through an LLM or not? If you can't tell, why does it matter? I don't want to be robo-slopped en masse or be fed complete fabrications, but neither of those actually requires an LLM. If you're going to use an LLM to gather your thoughts, I don't see a problem with that. | | |
| ▲ | Barrin92 8 hours ago | parent [-] | | > Like what real difference does it make to you? The difference is that you get to see the unfiltered, unique perspective of a real human being. It's the same reason I don't want to talk to anyone through an Instagram or TikTok beauty filter or accent remover. If your thoughts are unordered, that's okay; I'll take your unordered thoughts over some smoothed-over crap. Do people really have such a low opinion of themselves that they have to push every single thing through some layer of artifice? | | |
| ▲ | wvenable 8 hours ago | parent [-] | | > The difference is that you get to see the unfiltered, unique perspective of a real human being. The implicit, unfounded assumption is that that's actually worth more than a well-written, orderly response. Most comments are kind of crap. Not everyone is good at writing. In some cases, it might even be a disability aid. And if their comments aren't good, we have a system in place to rank them accordingly. Again, I think the only problem is quantity. If we're overrun with low-effort posts, no amount of ranking will help that. | | |
| ▲ | munificent 6 hours ago | parent [-] | | > The implicit, unfounded assumption is that that's actually worth more than a well-written, orderly response. It's not implicit or unfounded. The parent comment is explicitly saying that's what they prefer. And, as an actual human, their preference is intrinsically valid for them. If I like my kid's crappy cooking over a Michelin-star meal made by a robot... then I get to like my kid's crappy cooking more. I have that right. There is no social consensus when it comes to what I want. You can't argue about whether my preference is correct or not; it's my preference. | |
| ▲ | wvenable 2 hours ago | parent [-] | | As a software developer and a human being, I know people often say they prefer one thing while actually preferring something else. That's human nature. People have strong feelings about AI in general, and that can definitely cloud what they will say about it. Everybody hates AI, but, like CGI in movies, they likely only hate the AI or CGI that they notice. |
|
|
|
|
|
|
| ▲ | ffsm8 11 hours ago | parent | prev | next [-] |
| If you had the LLM write the comment, then those weren't your thoughts. I sometimes wonder if people aren't forgetting why we're on this platform. The goal is to have interesting discourse and maybe grow as a human by broadening your horizons. The likelihood of that happening with LLMs talking for you is basically nil, hence... why even go through the motions at that point? It's not like you get anything for upvotes on HN. |
| |
| ▲ | wvenable 10 hours ago | parent [-] | | > If you had the LLM write the comment, then those weren't your thoughts. But what if I provided the LLM with my thoughts? That's actually how I use LLMs in my life -- I provide them with my thoughts and they generate things from those thoughts. Now, if I'm just giving it your comment and asking it to reply, then yes, those aren't my thoughts. But why would I do that? I think the answer goes back to my original point. If I'm telling you my thoughts and then you go and tell a friend those thoughts, would you say those are still my thoughts even though I wasn't the one expressing them directly to your friend? | | |
| ▲ | meatmanek 9 hours ago | parent [-] | | I like to think about it in terms of output-to-prompt ratio. For HN comments, I think an output ratio of 1 or less is _probably_ fine. Examples:
- translating (relatively) literally from one language to another would be ~1:1
- automatic spelling/grammar correction is ~1:1
- using an LLM to help you find a concise way of expressing what you mean, i.e. giving it extra content to help it suggest a way of phrasing something that has the connotation you want, would be <1:1
Expansion (output > prompt) is where it gets problematic, at least for HN comments: if you give it an 8-word prompt and it expands it to 50, you've just wasted the reader's time -- they could've read the prompt and gotten the same information. (Expansion is perfectly fine in a coding context -- it often takes far fewer words to express what you want the program to do than the generated code will contain.) | |
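A minimal sketch of that ratio heuristic, in Python -- word counts stand in for tokens here, and the function names and the 1.0 threshold are illustrative assumptions taken from this comment, not an established metric:

    def output_to_prompt_ratio(prompt: str, output: str) -> float:
        """Ratio of generated words to prompt words; values above 1 mean expansion."""
        prompt_words = max(len(prompt.split()), 1)  # guard against an empty prompt
        return len(output.split()) / prompt_words

    def looks_like_expansion(prompt: str, output: str, threshold: float = 1.0) -> bool:
        """Flag replies where the model added bulk instead of condensing the input."""
        return output_to_prompt_ratio(prompt, output) > threshold

    # The 8-word prompt expanded to a 50-word reply above gives a ratio of 6.25 -> flagged.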
| ▲ | wvenable 9 hours ago | parent [-] | | I think your examples are all perfectly fine. As for expansion, that might just be the risk we take. I've been downvoted on Reddit for being "too verbose" in my replies, and I'm a human. And perhaps just reading the prompt in that case wouldn't give you more information; the LLM might actually have some insight that is relevant to the conversation. What's the difference between that and googling for something and pasting it in? | |
| ▲ | ffsm8 an hour ago | parent [-] | | The linked rule does not make such a distinction, and I don't see how the rule could be enforced with such a caveat, either. Hence no, none of these examples should be okay -- even if pure translation and grammar checking are going to be effectively impossible to detect, and so likely pointless to talk about. And the last one is often detectable and very clearly against the rule - I'm not sure how you can come to any other conclusion. | |
| ▲ | wvenable an hour ago | parent [-] | | > I don't see how the rule could be enforced with such a caveat I don't see how this rule is going to be enforced anyway. Many people posting with AI help won't get noticed at all, and about 100 times as many people are going to be accused of using AI because they use proper grammar. |
|
|
|
|
|
|
| ▲ | malfist 11 hours ago | parent | prev [-] |
| Amusingly, your comment carries some of the tropes of AI authorship ("is not a problem on its own... is the problem"), but the fact that it's not shaped like a profound insight being discovered in every line is what makes it human. How much AI writing will pass under the radar when the big companies aren't all maximizing to generate the most engagement-hacking content in a chatbot UI? Maybe it'll still stand out for being low quality, but I'm not sure. There's lots of low-quality, human-authored content. Not sure where my comment is going, I just kinda rambled. |
| |
| ▲ | wvenable 10 hours ago | parent [-] | | > Amusingly, your comment carries some of the tropes of AI authorship It was trained on 30 years of my posts on the Internet; I'm sure some part of it sounds just like me. |
|