| ▲ | tzs 7 hours ago |
| How about comments that include AI output if labeled? |
| Earlier today I remembered a Supreme Court case I'd heard about 35 years ago that was relevant to an ongoing HN discussion, but I could not remember the name of the case, nor could I find it by Googling (Google kept finding later cases involving similar issues that were not relevant to what I was looking for). I asked Perplexity, and given my recollection and when I heard about the case, it suggested a candidate and gave a summary. The summary matched my recollection, and a quick look at the decision itself verified it had found the right case and did a good job summarizing it--probably better than I would have done. |
| I posted a cite to the case and a link to the decision. I normally would have also linked to the Wikipedia article on the case, since those usually have a good summary, but there was no Wikipedia article for this one. I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked and it was a good summary. Would that be OK, or would that count as an AI-written comment? |
| I have also considered, but not yet actually tried, running some of my comments through an AI for suggested improvements. I've noticed I have a tendency to do three things that I probably should do less of: |
| 1. Run-on sentences. (Maybe that's why, of all the people in the 11th-100th spots on the karma list, I have the highest ratio of words to karma, with 42+ words per karma point [1].) |
| 2. Use too many commas. |
| 3. Write "server" when I mean "serve". I think I add "r" to some other words ending in "e" too. |
| I was thinking those would be something an AI might be good at catching and suggesting minimal fixes for. |
| [1] https://news.ycombinator.com/item?id=46867167 |
|
| ▲ | altairprime 7 hours ago | parent | next [-] |
| You were correct not to post the summary. HN tends to expect readers to invest time in reading and understanding long-form content, and expects the community to step into discussions and offer context and explanations when necessary. One of the most important context statements on this site was “in mice”, posted as a two-word comment and elevated to the top comment on its post. An AI summary will miss that context altogether while busily calculating a cliff note no one wants to read (and could often get you flagged and potentially banned, even before today’s guideline update). |
| If a reader wants an AI summary, they have the same tools you do to generate it by their own hand. If you have domain familiarity with the topic, have some personal insight to offer a lens through, or care about it deeply enough to write a summary yourself, then go ahead! I almost never post about AI, given my loathing of generative ML, but I posted a critical summary in a recent “underlying shared structure” post because it was a truly exciting mathematical insight and the paper made that difficult for some people to see. |
| Please don’t use AI to reduce the distinctiveness of your writing style. Run-on sentences are how humans speak to each other. Excess commas are only excess when you consider neurotypicals. I’m learning French and have already started to fuck up some English spelling because of it. None of that matters in the grand scheme of things. Just add -er suffix checks to your mental proofreading list and move on with being you. |
| |
| ▲ | ASalazarMX 7 hours ago | parent [-] | |
| I've done research using AI, and it does work better than a search engine (when it doesn't hallucinate), but I find copy-pasting verbatim distasteful and disrespectful of other people's time. What I do is copy the URLs for reference and summarize the issue myself in as few sentences as possible. Anyone who wants to learn more can follow the reference. |
| | |
| ▲ | altairprime 6 hours ago | parent [-] | |
| That’s fine, then! A summary handcrafted for HN is of course fine, though you might find more value in citing what you consider most distinctive about the source than in a summary that differs little from its own opening paragraph, abstract, etc. |
|
|
|
| ▲ | topaz0 7 hours ago | parent | prev | next [-] |
| It sounds like you already know how to improve your comments; how about just doing those things? |
| |
| ▲ | tzs 6 hours ago | parent | next [-] | | Well, I keep missing the "serve"/"server" thing because spell checkers think "server" is a real word, so they don't flag it. :-) | | |
| ▲ | Hnrobert42 4 hours ago | parent [-] | | Getting that wrong is a small price to pay. Plus, people know what you mean. |
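The failure mode tzs describes (a typo that lands on another valid word) is exactly what dictionary-based spell checkers cannot catch: "server" passes the dictionary test, so only a personal watch list can surface it. A minimal sketch of such a checker, with the function name and word list invented for illustration; it can only flag candidates for human review, since "server" is often exactly the word the writer meant:

```python
import re

# Personal watch list of "real word" typos a spell checker will not flag:
# the typed word is valid English, so only context (or a human) can catch it.
# Pairs are illustrative, mapping the typo to the likely intended word.
CONFUSABLES = {
    "server": "serve",
    "latter": "later",
}

def flag_confusables(text: str) -> list[tuple[str, str]]:
    """Return (found, maybe_meant) pairs for every watch-list word in text.

    Flags candidates only; it cannot auto-correct, because each flagged
    word may well be the one the writer intended.
    """
    return [
        (word, CONFUSABLES[word])
        for word in re.findall(r"[a-z']+", text.lower())
        if word in CONFUSABLES
    ]

print(flag_confusables("I will server dinner at eight."))
# → [('server', 'serve')]
```

A pass like this could run over a draft comment before posting, leaving the final serve-vs-server judgment to the author.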
| |
| ▲ | raincole 7 hours ago | parent | prev [-] | | Too much effort, bruh. | | |
| ▲ | verdverm 7 hours ago | parent [-] | | Capitalization is apparently too much effort for some now. Who would have thought the Ai would make us so lazy so quickly? Who cares about people with reading disabilities; let's shift the burden onto the reader. My time is better spent managing my Ais. | | |
| ▲ | ASalazarMX 6 hours ago | parent | next [-] | | This started years before LLMs, as a way of signaling unconventional thinking. Maybe influenced by the UX of instant messaging. | | |
| ▲ | verdverm 6 hours ago | parent [-] | | That's my general understanding too. More recently people have adopted it as a way to not look like Ai; I've had several cite that as their rationale. There has been a notable uptick since the Ai step-function change at the end of last year, along with all the other patterns we see, such as the one that underlies this new HN rule. |
| |
| ▲ | charcircuit 7 hours ago | parent | prev [-] | | > onto the reader Or the reader's AI, which is able to format or translate the text to make it easier to read. | | |
| ▲ | verdverm 6 hours ago | parent [-] | | I shouldn't have to burn tokens to read. Most input boxes and editors will handle the capitalization for you during auto-correct. It seems like people go out of their way to drop the caps. | | |
| ▲ | duskdozer 2 hours ago | parent [-] | | On mobile, maybe? I haven't had anything like that on any PC I've worked on. |
|
|
|
|
|
|
| ▲ | notatoad 4 hours ago | parent | prev | next [-] |
| Before chatbots, people used to link to Google search result pages as a passive-aggressive way to say “the information is out there, go find it, I don’t care about you enough to explain it to you”. Pasting a ChatGPT response into a comment, and labeling it as such, feels the same to me. It is more, not less, insulting than trying to pass an AI response off as your own. |
|
| ▲ | nunez 6 hours ago | parent | prev | next [-] |
| I'd be fine with treating this like snippets from Wikipedia with citations back to the article. This way, people can manually verify the sources if they so choose. |
|
| ▲ | computomatic 7 hours ago | parent | prev | next [-] |
| > I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked and it was a good summary. Would that be OK or would that count as an AI written comment? |
| The rule seems written to answer this directly. Absolutely nobody cares what Perplexity has to say about the case, summary or otherwise. If you mention what the case is, I can ask Claude myself if I’m interested. Better yet, post a link to an authoritative source on the case (helpful but not required). At minimum, verify your info via another source; the community deserves at least that much. An AI-generated summary adds nothing positive and actually detracts from the conversation. |
| |
| ▲ | tzs 6 hours ago | parent [-] | | I did post a link to the Supreme Court's decision at Cornell Law School's Legal Information Institute's archive of Supreme Court decisions. I looked at the decision itself sufficiently to see that it was the case I remembered and that my recollection of the facts and the decision was correct. I just didn't include a summary because I didn't find a good one I could link to. Normally I'd write a brief one myself but I found that hard to do when Perplexity's summary was sitting right there in the next window and it was embarrassingly better than what I would have written. |
|
|
| ▲ | rzmmm 7 hours ago | parent | prev | next [-] |
| Perplexity supports sharing a URL to the thread. I think it's quite natural to link AI summaries like that. |
| |
| ▲ | davorak 7 hours ago | parent | next [-] | | I do not want to see links to AI summaries with the AIs the way they are now. None I have used so far can cite sources correctly or verify their information. If the poster is not doing that verification, then it is pushing that work onto the readers. If the poster did do the verification, then posting that verification is better than the AI summary. | |
| ▲ | lossyalgo 6 hours ago | parent | prev | next [-] | | How long do those links exist though? Until the author deletes it? | |
| ▲ | ASalazarMX 6 hours ago | parent | prev [-] | |
| > I think it's quite natural to link AI summaries like that. |
| I think you misspelled "convenient". Beyond the small effort it takes to share generated text, one has to consider the time of however many humans will spend reading it. If an LLM wrote something you don't know about, you're not qualified to judge how accurate it is; don't post it. If you do know the subject, you can summarize it more succinctly yourself and save your readers many man-hours. |
| If LLMs evolve to the point where they don't hallucinate, lie, or write verbosely, they will likely be more welcome. |
|
|
| ▲ | bsimpson 7 hours ago | parent | prev | next [-] |
| This is how I would use/expect AI to be used on HN. I would also like this clarified. |
| |
| ▲ | altairprime 7 hours ago | parent [-] | | AI-edited comments are not welcome here. If you’re not able to see and make those changes in your HN writing without AI editing, then you’ll either have to post on HN without those changes, or you’ll have to strive to apply them yourself. | | |
| ▲ | bsimpson 5 hours ago | parent [-] | |
| This sounds like you're chastising me for something totally distinct from what I was supporting the request for clarity on. I'm not asking or advocating for using AI as a copy editor. |
| The post I replied to asked about using Gemini as if it's Wikipedia - that is, saying "according to Gemini" when citing a fact where one might once have written "according to Wikipedia" or even "according to Google." |
| This is a forum people hang out in part-time. It's nobody's job to spend an hour researching primary sources to post a comment. Shallow searches and citations are common and often helpful in pointing someone in the right direction. As AI becomes commonplace, a lot of that is being done with AI. |
| "Can I have AI write a reply for me?" is a very different question from "Can I cite an AI search result?" This rule change is clear about the former. There's room to clarify the latter. |
| | |
| ▲ | duskdozer 2 hours ago | parent | next [-] | | I don't see how an AI response would have any value. If you aren't familiar enough with the material to make a statement yourself, you aren't familiar enough to validate the response. If you use it as a pointer to verifiable sources, you should instead post the sources themselves and why you think they're relevant. | |
| ▲ | altairprime 4 hours ago | parent | prev [-] | |
| > This sounds like you're chastising me |
| Nope. (For an example of that, see any comment I posted to this discussion that starts with “Please don’t”.) |
| > "Can I cite an AI search result?" |
| Ah. An AI response is neither a primary source nor a reference source, and HN tends to strongly prefer those. Linking to a Google /search?q= isn’t any more welcome here than linking to an AI /search?q=; neither is stable over time, and both may vary wildly based on algorithmic changes. Wikipedia, as a curated reference source, is not equivalent to either a search engine or an AI response at this time, and evidences much stronger stability, striving toward that of a classical print encyclopedia (but never reaching it). Perhaps someday Britannica will release an AI that provides only fully factual replies derived in whole from the Britannica encyclopedia, but as of today, AI has not demonstrated the general veracity and reliability that even Wikipedia, the very worst of possible reference sources, has met over the years. |
| (Note that an Ask-a-Librarian response would be more credible than a Wikipedia page, and much more credible than today’s AI attempts to replace that function; but linking such a response would still be quite problematic, not least because its primary value is either directly quotable and/or consists of citations that should be incorporated into the post itself. If that veracity differential changes someday, once the AI hallucination problem is solved at the underlying level rather than in post-filters, I’m happy to revise my position.) |
|
|
|
|
| ▲ | verdverm 7 hours ago | parent | prev [-] |
| I would still say no; there is something about finding the words for yourself, even if they aren't as elegant as an Ai can make them. It's fine, most humans prefer imperfection. |
| The point is we don't want to read Ai summaries; we can make one ourselves if we want. Personally, with certainty, I don't want to read one from Perplexity, on the basis that they do the Ai for Trump Social (reverse-KYC, if you are not aware). |
| For some inspiration on why this is meaningful: https://www.npr.org/2025/07/18/g-s1177-78041/what-to-do-when... |
| |
| ▲ | tzs 6 hours ago | parent [-] | |
| > I would still say no, there is something about finding the words for yourself, even if they aren't as elegant as an Ai can make them. It's fine, most humans prefer imperfection. |
| In this instance the only reason I considered using the AI summary was that there was no Wikipedia article about the case (which surprised me, as it is one of the foundational cases in Commerce Clause law...although maybe all the points in it are covered in later cases that do get their own Wikipedia articles?). Normally I'd just copy Wikipedia's summary into my comment and link to Wikipedia and to the decision itself for people who want the details. |
| > The point is we don't want to read Ai summaries, we can make one ourselves if we want. |
| How would you know if you wanted one? Someone mentioned they would like to see a case on this subject but didn't think it would ever happen. I knew of a case on the subject, found the reference, and posted the link. At that point we are already on a tangent from what most of the thread is about and from what most people reading it care about. The point of the summary would be to let you know whether the case might actually be relevant to anything you cared about in the thread. (The answer would probably be "no" for 95+% of the people reading the comment.) | |
|