| ▲ | madrox 18 hours ago |
| I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible. I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow. I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize. And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos. |
|
| ▲ | valicord 18 hours ago | parent | next [-] |
| > I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral. Isn't it obvious? If I'd wanted to see an AI response to my question, I'd have asked it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast food sometimes, but if I was served a Big Mac at a sit-down restaurant I'd be properly upset. |
| |
▲ | madrox 18 hours ago | parent | next [-] | | > If I'm asking humans, I want to see human responses I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y." Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're being worthy of someone else's time and attention. And if that's what people are seeking, Slack and social media are probably not the platforms for it (and, arguably, never were). | | |
| ▲ | Aurornis 14 hours ago | parent | next [-] | | > It shouldn't matter as long as it addresses your ask, yet it does. If the LLM output is concise and efficient I don’t actually care that it’s LLM output. My problem is that much of the LLM prose feels like someone took their half-baked idea and asked the LLM to put a veneer of quality writing on top of it. Then you waste your time reading it to parse out the half-baked idea hiding among the wall of text. | | |
▲ | californical 11 hours ago | parent [-] | | Yes, exactly. If a person has a shitty idea that sounds good, they start writing about it. If they exercise some care in their writing, the act of writing itself is enough to make them realize that their idea is shitty. By the way, it happens to me all the time! Even just on HN, I’ve bailed halfway through writing a comment because I realized that I didn’t know what I was talking about, lol. But an LLM will gladly take that shitty idea and expand it into a very plausible article/message/post that seems reasonable if you don’t think very critically about it. And it’ll be done with such a seemingly high level of care that you'd assume any human author had been fact-checking themselves the whole time. So it forces the reader to think even more critically, rather than letting our subconscious try to judge the authenticity of the writer through the language they use. For example, when someone says “my WiFi is broken” while referring to the fact that their computer is dead, we can quickly judge them as “not an expert at computers”. But if they say that “my M.2 drive has gone bad”, we inherently assume they have some understanding. When the first person uses LLMs to write, they sound as informed as the second person, even if they are completely clueless and wrong. |
| |
▲ | eucyclos 12 hours ago | parent | prev | next [-] | | In my case, it's because it doesn't address my ask, which is why I didn't ask an AI in the first place. The only person I know who does sloppypasta is my brother-in-law. I know he means well, but when I ask his opinion I want the perspective of someone in his demographic. If a generic AI response met my needs, I wouldn't be asking him. | |
▲ | taosx 5 hours ago | parent | prev | next [-] | | I think it should matter. When you ask the AI something, you are in a frame of mind and you have a specific context; the question also holds value and context that might completely change the parsing of the answer, or at least the difficulty of it. Both what I'm asking and the AI's response, passed through an intermediary, lose some context (the prompt); it's like the telephone game, where the data becomes more and more distorted. That's why people don't have an issue with their own AI-generated answers. Another issue is that when I'm talking with someone and parsing through what they've said, I'm considering them, as a person, taking all available context (some of this might happen unconsciously). In any case, I don't think there is an easy solution to the problem. | |
| ▲ | heavyset_go 8 hours ago | parent | prev | next [-] | | I'm purposely talking to a person and not a chatbot. So it does not meet the bare minimum of addressing my ask, the premise of the ask hinges on a discussion with a real person. | |
▲ | valicord 18 hours ago | parent | prev | next [-] | | > It shouldn't matter as long as it addresses your ask But it doesn't? I'm more than capable of using Google and ChatGPT myself. If I was looking for a machine-generated answer to my question, I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means that either the slop answer is not sufficient for some reason or that I want to hear from actual humans that have subjective experiences that an LLM cannot have. Posting an AI response verbatim basically says "I think you're too stupid to click a couple of buttons, so let me show you how it's done". I think it's very reasonable to get upset at the implication. | | |
▲ | toraway 15 hours ago | parent | next [-] | | As an example of this, I am currently comparing two different models of Android e-readers, from a Chinese brand where the tech specs are all published but there aren't a lot of good comparative reviews. Plus, specs like battery life are close to the same in mAh, but for e-readers especially, Android optimization/drivers/etc. make a gigantic difference. So I have been Googling for "Reader X vs Reader Y review" (/comparison/etc) hoping to find Reddit comments or non-spam blog posts from people who actually own both to compare screen and battery life. I found a Reddit thread comparing them directly, and lo and behold, the first comment is someone saying "I own both but honestly you could just ask ChatGPT for this". Fortunately a couple other people responded... When I ask Gemini or ChatGPT, all I get is a regurgitation of the tech specs (that are all mostly identical) plus summarized SEO spam reviews (that were probably written by another LLM based on those same tech specs), and it's totally unhelpful. So for this, I absolutely do NOT want an OpenClaw bot to respond as if they've physically used the devices, and it would be actively enraging to learn a "helpful" comment "answering" the question was actually just an LLM impersonator. | |
| ▲ | madrox 16 hours ago | parent | prev [-] | | I think it is reasonable, yes, but I don’t think it’s ever been reasonable to expect reasonableness on the internet. We have a difficult enough time showing each other decency. | | |
| ▲ | YurgenJurgensen 15 hours ago | parent [-] | | Then why even have this discussion in the first place? You weren’t expecting any reasonable responses to it, after all. | | |
| ▲ | coldtea 10 hours ago | parent [-] | | Do you only do stuff where you expect the outcome to be good? Perhaps they did it for the off chance of a good response. |
|
|
| |
▲ | JumpCrisscross 15 hours ago | parent | prev | next [-] | | > shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y" The people copy-pasting slop almost never excerpt the relevant response. As a result, you get non-concise text you have to triple-check. This is functionally useless to the point of being fine to skip. | | |
| ▲ | hombre_fatal 14 hours ago | parent [-] | | Exactly. If you can find the answer for someone with AI, then by all means use it. But at least filter, curate, and verify it into an answer. |
| |
| ▲ | coldtea 10 hours ago | parent | prev | next [-] | | >I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y". Because it's probably not actually about the content but the sense of connection. It's also about the content. Generic slop I can get on demand from an LLM myself, vs a novel insight. | |
| ▲ | MagicMoonlight 5 hours ago | parent | prev | next [-] | | We can tell by your fury that you’re a slop poster. I don’t want a random person’s use of an AI to be slopped at me. I don’t know what they asked it, a lot of the words are made up, and I have to go through the effort of decoding it. If I wanted an AI answer I would ask an AI. AI slop is made up. It’s like handing me a paste of google search results. It’s creating work for me. | |
▲ | mpalmer 15 hours ago | parent | prev [-] | | > People want to feel like they're connecting to people. That they're being worthy of someone else's time and attention They are achieving the exact opposite. I don't connect with the person who sends me slop. And they send me content that is a waste of my time and attention, because I have to vet it. Why would I trust someone - how can I ever connect with them - when the only thing I know about them is they take shortcuts? |
| |
| ▲ | falcor84 16 hours ago | parent | prev [-] | | I am really into this approach of AI being used as a user-agent. In particular, I've been thinking a lot about educational content, and what I'd love to ask educational providers for is not AI-generated content, but rather carefully human-built curricula offered in a structured manner, which my own AI could then use to create dynamic content for me. |
|
|
| ▲ | Aurornis 14 hours ago | parent | prev | next [-] |
| > The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow. Reading AI generated prose, even if it’s my prompt, always gives me the same feeling as when I read a LinkedIn post: Like a simple concept was stretched into an unnecessarily long, formulaic format to trick the reader into thinking it was more than it was. Everyone taking their scraps of thoughts and putting them into an LLM likes it because the output agrees with them. It’s flattering. But other people don’t like it because we have to read walls of text to absorb what should have been a couple of their scattered bullet points. Just give me the bullet points. Don’t run it through the LLM expander. That just wastes my time. |
|
| ▲ | bandrami 13 hours ago | parent | prev | next [-] |
| Everybody wants to use LLMs to produce things, and absolutely nobody wants to consume the things that LLMs produce. That is the fundamental reason this is all going to collapse, unless we find a way for producers to pay consumers to consume their LLM output. |
| |
▲ | eucyclos 12 hours ago | parent [-] | | Gotta disagree. I've found several great new YouTube channels that clearly use AI for everything but the script writing. I assume it's an enthusiastic and smart niche expert who lacks the charisma to make videos in addition to doing the research. I'm very glad AI is filling in those people's weak spots. | | |
| ▲ | grey-area 10 hours ago | parent | next [-] | | How would you know it’s an enthusiastic and smart expert creating the content you’re consuming, do you have the subject matter expertise to judge that? The odds are far higher it’s somebody who knows very little about anything but wants to make money from the gullible. | |
| ▲ | ngetchell 5 hours ago | parent | prev [-] | | How do you know the scripts aren't AI generated? |
|
|
|
| ▲ | hastily3114 10 hours ago | parent | prev | next [-] |
| >I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow. The problem is that getting an AI to answer a question is trivial. If I wanted to know what an AI has to say about the topic, I would just ask it myself. Sending AI output has, as the author writes, the same connotation as sending a LMGTFY link. It does not provide me any value at all; I know how to write a question to an AI, just as I know how to use Google. |
|
| ▲ | coldtea 10 hours ago | parent | prev | next [-] |
| >I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. Which is irrelevant. TFA is talking about personal communication (and the examples are from a business setting). And their concern is not the mere quality or lack thereof, but also its origin, and this is something new. >I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow. No, many of us hate "our AI" content too, and wouldn't impose it on other people, same way we wouldn't fling shit at them. |
|
| ▲ | slackbaitnow 15 hours ago | parent | prev | next [-] |
| I am sorry, but in what way is everyone letting the "We've been creating bait content for a long time" comment slide? Did you even read the article? It is about person-to-person interactions. The three examples were: * Someone butting in to an ongoing discussion with a solution (but it's generic and misfitting AI slop) * Someone being asked for their expertise and responding (but it's generic and misfitting AI slop) * Someone coming with a problem thesis looking for help (but it's generic and misfitting AI slop) The only one of these that existed prior to AI was the middle one, and the article very specifically calls out how transparent it used to be, because it had the shape of a Google link. The first one would have been impossible, because the person would have had to write the unhelpful response themselves, and they wouldn't have found the words at length; you could ignore them or pick it apart easily. The last one would have been impossible unless they were copy-pasting from a large PDF, which would look nothing like a chat message. What kind of workplace hellscape do you work in where people posting low-effort bait on Slack was the norm? The premise of this reply is entirely nonsensical. |
|
| ▲ | lich_king 16 hours ago | parent | prev | next [-] |
| I don't think that "it's more of the same" is a good way to think about it. The internet contained a lot of low-quality content, but even low-quality content used to be fairly expensive and time-consuming to produce. Further, you could immediately discern bottom-of-the-barrel content-farmed nonsense by the writing style alone. Now, LLMs make it practically free to generate unlimited amounts of slop that drowns out human-written stuff, and they can imitate the style hints we used to depend on for quick screening. |
| |
▲ | madrox 16 hours ago | parent [-] | | Yet how are the alternative ways of thinking about it better? Spending your time angry about what others can do? In any era, that’s a poor life philosophy. The problem is the same as it has always been: figure out how to use your time and attention effectively. | | |
| ▲ | lich_king 14 hours ago | parent | next [-] | | A sufficient number of people being angry about something is how you end up with social norms. These norms will shape how the technology is used. Conversely, if your take is that there's no point being angry and we should just take it in stride, that just emboldens the producers of slop. | |
▲ | beepbooptheory 13 hours ago | parent | prev | next [-] | | Is it possible to be critical without being angry? Are the only options here misplaced ire or total quiescent fatalism? Does the site here even seem excessively angry? | |
| ▲ | SpicyLemonZest 10 hours ago | parent | prev [-] | | Strategic, directed anger is an important component of using your time effectively. It sends a clear signal that certain kinds of behavior are unacceptable and people who'd like continued access to your time had best not engage in them. You shouldn't go around yelling at people every time you get a bit frustrated, but you should and I do express anger when someone signs their name to LLM-generated Slack responses. |
|
|
|
| ▲ | namnnumbr 18 hours ago | parent | prev | next [-] |
| I acknowledge that those likely to copypaste slop aren't likely to find this article themselves, but I built the page to be shared or to guide discussions around etiquette, like nohello.net or dontasktoask.com. IMO a common understanding of AI etiquette would provide social pressure to halt some of these behaviors. I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, whether they iterated with the AI, and if/what/how they validated. (The other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal human-written text.) |
| |
| ▲ | lovemenot 17 hours ago | parent | next [-] | | Couple of expressions from pre-AI culture: "RTFM", "Google is your friend". These were well-used because they are directed, pithy, abrasive. (n)amow(?): (not) All my own work ? | | |
| ▲ | username223 14 hours ago | parent [-] | | Good point: RTFM and (wall of slop) are two ways of telling someone that responding to them is not worth your time that are both ruder and more time-consuming than simply saying nothing. Explaining the culture of RTFM, i.e. "if there was any way you could possibly have found the answer otherwise, you should never have asked the question" to non-tech friends usually results in disbelief. But the slop-wall is even worse, as it wastes the questioner's time in figuring out that they're just getting slop. At least RTFM is efficient. |
| |
| ▲ | no-name-here 10 hours ago | parent | prev | next [-] | | Clickable links for URLs mentioned in parent comment: https://nohello.net https://dontasktoask.com | |
▲ | madrox 16 hours ago | parent | prev | next [-] | | I think you'll find you get farther by offloading this unpleasantness to an AI and open-sourcing it than by teaching etiquette to the internet, a place not known for its decency. | |
| ▲ | Aeolun 17 hours ago | parent | prev | next [-] | | Yes, I can replace the link to nohello in my automated responses now :) | |
| ▲ | YurgenJurgensen 14 hours ago | parent | prev [-] | | There’s a certain very satisfying force to turning something into a static website that you can point people at. The Internet equivalent of “don’t make me tap the sign”; especially in an era of AI-slop. |
|
|
| ▲ | JumpCrisscross 15 hours ago | parent | prev | next [-] |
| > I don't have a lot of sympathy for people angry at this type of behavior I ignore it. But if that isn’t an option, this sort of writing can help you convince someone in power around you it’s okay to ignore it. |
|
| ▲ | Gigachad 6 hours ago | parent | prev | next [-] |
>like they're being hoodwinked somehow Because they are. It would be like if I bought some trinket off AliExpress and told you I made it by hand just for you. You wouldn't mind if you had bought it yourself, but the fact that I lied about it to make it seem like I care is deceptive and immoral. Sending someone AI-generated text without disclosing so is incredibly offensive. It says you don't care about wasting the receiver's time and don't care about honesty either. |
|
| ▲ | TonyStr 9 hours ago | parent | prev | next [-] |
| Talking about bait, good job getting 42 responses on hacker news! Your opinions are controversial enough to draw out people who need to correct them, yet genuine enough to not be passed off as a troll and downvoted. |
|
| ▲ | mcphage 18 hours ago | parent | prev | next [-] |
| > We smiled and laughed for years that all of this technology and power is just being used to share cat videos. Well, cat videos make people happy. |
| |
|
| ▲ | waterTanuki 17 hours ago | parent | prev [-] |
| I find your comment disingenuous at best. > The internet was not a bastion of high quality content or discourse pre-AI. I have read thousands upon thousands of pages of AI-related discourse, watched hundreds of videos since 2022, maybe even a thousand now on it. NEVER at any point in time did people opine for the "high quality" internet of before. They opined for the imperfect HUMAN internet of before. We are now seeing once pristine, curated corners of the internet being infected with sloppypasta. This is quite a broad brush to paint the internet with. It's like saying The Earth is not a bastion of warzones/peaceful places to live. That is HIGHLY dependent on location. |
| |
| ▲ | marcus_holmes 15 hours ago | parent | next [-] | | Sorry, not related to your point, but the language: To "opine" is to give an opinion on something. To "pine" for something is to wish for it, usually in a nostalgic sense. I get how the two are related and can be confused, especially when you're talking about comments on the web. Just thought I'd clarify. | |
| ▲ | madrox 15 hours ago | parent | prev [-] | | Even before AI, the human social internet was loaded with bots and disingenuous actors. You want the imperfect human internet that is also pristine and curated. I've been socializing on the internet since 1994, and I feel fairly confident in sharing that this never existed, except in nostalgia. If that's what you're pining for, you're going to have to find a highly protected part of the internet that is walled off from untrusted actors. However, that's always been the solution, and AI doesn't change that. | | |
▲ | fzeroracer 8 hours ago | parent [-] | | And since the foundation of the internet, the correct response to bots and disingenuous actors has been to (a) ignore them, (b) ban them, and (c) ostracize them. We're talking about basic behaviors that have been understood since Usenet, something you surely should be aware of since you grew up in that era. |
|
|