dataflow | 6 hours ago

Do the guidelines also disallow comments along the lines of "according to <AI>, <blah>"? (I ask this given that "according to a Google search, <blah>" is allowed, AFAIK.)
|
BeetleB | 5 hours ago

I would lean towards disallowing those. With "According to a Google search...", someone can ask for the specific links (and indeed, people often say to link to those sources to begin with instead of invoking Google). With "According to AI..." - why would most readers care what the AI thinks? It's not a reliable source! You might as well say "According to a stranger I just met and don't know..."

If you're going to say that the AI said X, Y, and Z, provide a rationale for why it is relevant. If you merely found X, Y, and Z compelling, feel free to talk about it without mentioning AI.

dataflow | 3 hours ago

For reference, the point here isn't to relay "what AI thinks", but what you found with the help of AI. The majority of the cases where I would say "according to AI, <blah>" are ones where <blah> actually does cite sources that appear plausible to me. Sometimes they're links; sometimes they're other publications not necessarily a click away. Sometimes I can verify them independently by spending half an hour researching; sometimes I can't do that, but they still seem worthwhile.

> If you merely found X, Y, and Z compelling, feel free to talk about it without mentioning AI.

I think you're seeing this as too black-and-white, and missing the heart of the issue. The purpose of mentioning AI is to convey the level of (un)certainty as accurately as possible. The most accurate way to do that is often to mention any use of AI, rather than hiding it. If AI tells me that it believes X is true because of links A and B that it cites, and I find those links compelling, then I absolutely want to mention that AI gave me those links, because I have no clue whether the model had any reason to bias itself toward those sources, or whether alternate links existed that said otherwise. Whereas if a normal web search just gives me links that mention terms from my query, then I get a chance to see the other links too, and I end up being the one who actually compares the contents of the different pages and figures out which one is most convincing. Depending on various factors, such as the nature of the question and my own background knowledge on the topic, one of these can provide a more useful response than the other -- but only if I convey the uncertainty around it accurately.

BeetleB | 2 hours ago

> The majority of the cases where I would say "according to AI, <blah>" are ones where <blah> actually does cite sources that appear plausible to me. Sometimes they're links; sometimes they're other publications not necessarily a click away. Sometimes I can verify them independently by spending half an hour researching; sometimes I can't.

In my experience, LLMs hallucinate citations like crazy. Over 50% of the time when I've checked, the citation either didn't exist, or it did exist but didn't support the LLM's assertions. This is true not just in chat, but for Google AI summaries as well. When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?

(If you look at my other comments, I'm actually in favor of using LLMs in some capacity for HN comments. Just not in this case.)

dataflow | 2 hours ago

>> actually does cite sources that appear plausible to me

> In my experience, LLMs hallucinate citations like crazy. Over 50% of the time when I've checked, the citation either didn't exist, or it did exist but didn't support the LLM's assertions.

Note that those are specifically not the cases where the AI is citing "sources that appear plausible to me". (I also don't find over-50% hallucination to be accurate for Google AI summaries in my experience, but that depends on your queries, and in any case, I digress...)

> When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?

To be clear, I understand both sides of the argument, and I don't think either side is unreasonable. I've been on both sides of this myself, and I don't think there's a clear-cut answer. I'm just hoping to get clarity on what the new policy is as far as this goes. I'm sure it'll be reevaluated either way as time goes on.
|
MetaWhirledPeas | 4 hours ago

I don't have a problem with that. First off, it's not very common. Second, it can add to a conversation, just as it can in in-person discussions. If you feel like it doesn't, don't upvote and don't reply. There's no value in pretending we're Woodward and Bernstein every time we leave a comment.
|
dang | 24 minutes ago

We don't want people copy-pasting in comments generally. Summary comments, quote-only comments (i.e. consisting of a quote and nothing else), and duplicate comments are other examples of this. It's not specific to LLMs. However, that's probably not critical enough to formally add to the explicit guidelines, so it's probably fine to leave it in the "case law" realm—especially because downvoters tend to go after such comments.
|
yellowapple | 5 hours ago

I think those should be allowed iff the nature of being AI-generated is relevant to the topic of discussion — e.g. if we're talking about whether some model or other can accurately respond to some prompt, and people feel inclined to try it themselves.

lossyalgo | 5 hours ago

I read those comments all the time, and I'm personally conflicted about them. On one hand, it's interesting to compare what comes out of the models; on the other hand, LLMs are all non-deterministic, so results will be fairly random. On top of that, everybody has a different "skill" level when prompting. In addition, models are constantly changing, so "I asked ChatGPT and it said..." means nothing when there is a new version every few months - not to mention you can often pick one of 10+ flavors from every provider, and even those aren't guaranteed not to change under the hood to some degree over time.
|
crossroadsguy | 4 hours ago

I'd rather ask the AI to provide a source and then cite the source. But if the source itself is AI-backed, then it's a bit different :)

dataflow | 3 hours ago

I explained this in a bit more depth in an adjacent reply (feel free to take a look), but obtaining the source from AI doesn't achieve the same thing. For example, there might be other links that contradict that source, which the AI wouldn't cite. Knowing whether the "best" link was picked by an AI or by a human is incredibly relevant when weighing its credibility.
|
snowwrestler | 5 hours ago

Citations can be helpful. But AI summaries and Google searches are poor citations because they are not primary sources.
|
dfxm12 | 4 hours ago

AI is not a source. A Google search results page is not a source. Hopefully, these things help you find a source. If you're posting something you feel the need to source, post the source along with your comment! For example, don't say "according to a Google search, x"; say something like "according to Microsoft's documentation, x" and provide a link to the relevant Microsoft Learn page.