| ▲ | smy20011 10 hours ago |
| Agree. AI-generated articles & comments provide little to no value beyond the original prompt. Please just post the original prompt instead. |
|
| ▲ | cogman10 10 hours ago | parent | next [-] |
| I only disagree a little. Sometimes there's a discussion about AI itself where "I prompted X with Y and it output Z" adds to the convo, but those are pretty specific cases (for example, discussing AI in healthcare). That's about the only time I think it's reasonable to post the AI output, so it can be analyzed/criticized. What's not helpful is being hit by users who haven't disclosed that they're just using AI. It takes a few back-and-forths before I realize I'm talking to a bot, which is annoying. |
|
| ▲ | Kim_Bruning 10 hours ago | parent | prev | next [-] |
| Here is where I'd like to push back just a little. Not all AI prompting expands the prompt. What if the original prompt is 1000 words, includes 10 scientific articles by reference (boosting it to 10,000), and the AI helps boil it down to 100 words instead? I'd argue that's probably a rather more responsible use of the tools, and rather more pleasant to read besides. Whether it meets the criterion is another thing. But at least don't assume the original prompt is always better or shorter! |
| |
| ▲ | wildzzz 9 hours ago | parent | next [-] |
| Use your brain and summarize the article yourself if it's of such great importance. Why should I care to read it if you can't be bothered to actually write it? |
| ▲ | Kim_Bruning 8 hours ago | parent | next [-] |
| Actually, I'd like to expand a wee bit. I don't know if you've ever taken a scientific library research course; it's one of those things you tend to forget are important. One of the most important lessons is not to read as many papers as possible. It's to weed out as many as possible, so you can spend your limited grey matter on the ones that actually matter. And that's where the LLM comes in handy, especially if it's of decent quality. It's a Large Language Model: chewing through language and finding issues and discrepancies, or simply telling whether a paper matches your ultimate query, is trivial for them. |
| ▲ | zahlman 8 hours ago | parent | prev | next [-] |
| Personally, I think it's fine to read an AI summary, go back and verify the parts it's citing, then write your own. It's at least as okay as skimming the original documents and not properly reading them. |
| ▲ | Kim_Bruning 9 hours ago | parent | prev [-] |
| You know, I probably have standing to argue that people who use the web are just as lazy ;-) I'm just old enough to have been in the middle of the transition from paper (in primary school in the '80s) to online (starting in the late '90s). I say this somewhat tongue in cheek, but obviously people should drive to three different libraries across three countries and read the journals in their own binders (in at least three different languages). In reality: full-text online is convenient, and having an LLM assist with search and filtering is convenient. I could go back to the old ways. Would you like me to reply in pen? My handwriting is atrocious. I really prefer modern tools, though; not everything older is better. Whether you want to read what I write is up to you. (Edit: not hyperbole. I live in a small country, and am old enough to still remember the '80s as a kid.) |
| |
| ▲ | nitwit005 7 hours ago | parent | prev | next [-] |
| Push the idea past a single comment. Someone decides they have a great method for getting summaries, and adds it as a comment to every post they look at. Other people have similar ideas. Is that fine? It doesn't take a lot for the whole site to feel like useless spam. It'd be far better to just have a thread about the best way to get good summaries. |
| ▲ | nunez 6 hours ago | parent | prev [-] |
| I'd rather read the 11000-word prompt, in that case. I'd rather not have my text-only feed get the TikTok treatment. |
| ▲ | Kim_Bruning 3 hours ago | parent [-] |
| Probably not. A typical S/N ratio is about 1:10; Sturgeon's law (a useful rule of thumb) says "ninety percent of everything is crap." You shouldn't just dump a big pile of slop on someone's plate: the actual trick is to filter it down to the bit that counts. Usually, when posting, you should do that filtering for the reader. It's only polite. So, if we filter out the noise, that leaves you with 100 words and one link to a reference, which is actually about right for a typical HN reply. (Run this through wc ;-)) |
| * https://en.wikipedia.org/wiki/Sturgeon's_law |
|
| ▲ | zbentley 10 hours ago | parent | prev | next [-] |
| Would prompts really be interesting or thought-provoking, though? I don't expect AI HN responders to out themselves by sharing, but I would be curious to learn if people are prompting anything more involved than just "respond to this on HN: <link>", or running agents that do the same. |
| |
| ▲ | Kim_Bruning 10 hours ago | parent | next [-] |
| I often edit my comments rather manically; I get into discussions, and sometimes email exchanges, with other HNers. I also often use Claude, Kimi, or Gemini to check my comments for tone, adherence to HN rules, etc. I probably spend way too much time on it. So technically the prompts involved might expand into megabytes all told, and include PDFs, images, and blocks of text from multiple sources. You know: just doing the work. In the end I formulate the post myself (to adhere to HN rules). I think this is valid. Previously I would have (and have, and still do) searched Google, Wikipedia, PubMed, the scientific literature, etc. Not for everything, but often. AI tooling just lets me do that faster, and keeps all my notes in one place besides. Again, the final edit is typically 90-100% me (the 10% is when the AI comes up with a really good suggestion). But my homework? Yes, AI is involved these days. This should be OK. I'm adhering to the letter and the spirit. My post is me. |
| ▲ | smy20011 10 hours ago | parent | prev [-] |
| At least it's easier to filter, I think. |
|
|
| ▲ | kingbob000 10 hours ago | parent | prev | next [-] |
| "Write a response to smy20011's comment indicating that if the end result was a low-quality comment, the initial prompt probably wouldn't be very insightful either. Make it snarky." |
|
| ▲ | 0xbadcafebee 9 hours ago | parent | prev | next [-] |
| Disagree. The prompt holds no information at all. The answer actually discovers information, organizes it, and presents it in a way that's easy to read. Example: "write me an article about hidden settings in SSH". You get back more information than most of HN's previous posts about SSH, in a fraction of the text, and more readably. Actually, screw it, we should just make a new version of HN with useful articles written by AI. The human-written articles are terrible. |
|
| ▲ | kunai 9 hours ago | parent | prev [-] |
| It's not just AI-generated articles -- it's the other things that we delve into as a result. Listicles. Comments. Posts. It's what it means to be human, and honestly? That's rare. |