▲ How much of HN is AI? (lcamtuf.substack.com)
72 points by surprisetalk 3 hours ago | 35 comments
▲ kylecazar 2 hours ago | parent | next [-]

Maybe add a category for posts and comments about AI on HN :) "Stories about AI" is not offensive to me. Its influence on the industry is undeniable, and if I'm feeling tired of that content I just won't engage with it.

AI writing is another story, but yeah -- HN is downstream of that problem. You can encourage people not to submit articles that seem to be LLM-authored, but it won't work.
▲ delichon an hour ago | parent | prev | next [-]

I'm afraid that we're in an interregnum. A few years ago AI could not pass a Turing test. A few years from now AI will be better at Turing tests than we are. We're now in this strange middle zone where we are dazedly grasping for solutions.

But what happens next, when we just fail at the task of recognizing ourselves in cyberspace? Where LatestClaw is just plain better at mimicking you than you are? What happens to the living we used to claw out of the ether for ourselves? Do I need to learn to farm?
▲ est 2 hours ago | parent | prev | next [-]

> I tapped into Pangram. Pangram is a remarkably good, conservative model for detecting LLM-generated text

I tried it against some of my AI-generated articles. It said 100% human.

Turns out if one manually writes a structure and a core idea first, nobody thinks it's AI.
▲ CharlesW 12 minutes ago | parent | prev | next [-]

> Pangram is a remarkably good, conservative model for detecting LLM-generated text. These detectors have a bad rep among techies, but the objections are often based on outdated assumptions or outright misconceptions.

Pot, kettle, black. "Remarkably good" drastically oversells its reliability and that of other AI detectors. It means very little that Pangram did better than its competitors in this snake-oily category in one 2025 benchmark.
▲ _pdp_ 2 hours ago | parent | prev | next [-]

There is no doubt there is a lot of AI-generated content. We do it too -- code, tutorials, etc. It is just too convenient and useful to ignore.

The question I have is this: is it possible that language will converge toward AI mannerisms -- i.e., most people will naturally write like AI because they pick up the subtleties of language from ChatGPT, Claude, etc.? In other words, there is an exposure effect at play. I just found out about Communication Accommodation Theory (CAT), which makes me think the answer is probably "yes".
▲ rob an hour ago | parent | prev | next [-]

Time to switch to a $10 one-time fee like the Something Awful Forums. No crypto.
▲ ljhsiung 2 hours ago | parent | prev | next [-]

One of many things that bums me out about AI is wondering whether content I create will be truly appreciated by humans, or will just be fed back into the algorithm. I often wonder how exactly you'd mitigate this.

Further, as a user, I wonder what incentive there is for me to write anything online at all, let alone comment on forums, if it will just be fed back into an LLM. Is paywalling or forcing user accounts the solution? That feels antithetical to the purpose of the internet in the first place.

Just musings.
▲ webprofusion an hour ago | parent | prev | next [-]

For an HN front-page article this is light on content. Should have used AI.
▲ senectus1 an hour ago | parent | prev | next [-]

I'm more interested in how many of the comments are AI.
▲ marysminefnuf 44 minutes ago | parent | prev | next [-]

I think we should let users personally add a set of, say, five tags to content on their account, and also let them see what everyone else is tagging things as at large. So if a blog that's written with AI is something you want to ignore, you can just tag that URL and it won't show, and you can see what other people tagged that blog as too.
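The tagging scheme this comment proposes could be sketched roughly as below. This is a minimal in-memory sketch under stated assumptions: `TagStore`, the five-tag-per-user cap, and the `ai-written` tag name are all illustrative inventions, not anything HN actually provides.

```python
from collections import Counter, defaultdict

MAX_TAGS_PER_URL = 5  # the "set of like 5 tags" cap from the comment (assumed limit)

class TagStore:
    """Hypothetical per-user URL tagging with a community-wide tally."""

    def __init__(self):
        self.by_user = defaultdict(dict)    # user -> {url: set of tags}
        self.by_url = defaultdict(Counter)  # url -> tag counts across all users

    def tag(self, user, url, *tags):
        """Attach personal tags to a URL, capped at MAX_TAGS_PER_URL per user."""
        current = self.by_user[user].setdefault(url, set())
        for t in tags:
            if len(current) >= MAX_TAGS_PER_URL:
                break  # enforce the per-user cap
            if t not in current:
                current.add(t)
                self.by_url[url][t] += 1  # contribute to the community tally

    def community_tags(self, url):
        """What people at large are tagging this URL as, most common first."""
        return self.by_url[url].most_common()

    def visible(self, user, url, ignored=frozenset({"ai-written"})):
        """Hide a URL the user has personally tagged with an ignored tag."""
        return not (self.by_user[user].get(url, set()) & ignored)
```

For example, once a user tags a URL `ai-written`, `visible()` returns `False` for them while everyone else still sees it, and `community_tags()` shows the aggregate counts.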
▲ nunez 35 minutes ago | parent | prev | next [-]

HN cargo-cults heavily, for sure. That's more a reflection of SV culture than something unique to HN.

2016-2018 was Docker and Kubernetes. 2020 was COVID. 2021-2022 was "WFH good, RTO bad"... and lots of Web3 and crypto stuff. 2023 was the dawn of AI, and it hasn't let up since.

These are vibes and likely inaccurate.
▲ deepsquirrelnet 2 hours ago | parent | prev | next [-]

> I tapped into Pangram. Pangram is a remarkably good, conservative model for detecting LLM-generated text. These detectors have a bad rep among techies, but the objections are often based on outdated assumptions

The Turing test is really in the rearview, huh? Humans need machines to detect whether a machine wrote the text, because humans aren't sure.
▲ halfcat an hour ago | parent | prev | next [-]

That's a great question and a very realistic thing for us to answer. There is definitely no increase in AI here. If you'd like, I can walk you through how the best posters arrive at this conclusion in the normal human way. Just say the word.
▲ marysminefnuf 2 hours ago | parent | prev | next [-]

Too much.
▲ cj 2 hours ago | parent | prev [-]

I haven't really noticed. Doesn't seem like HN has changed very much.

Edit: Clearly the topics have evolved over time (AI, crypto -- there will always be some topic taking up the majority of attention), but the type and worthiness of the content seem unchanged.