LeifCarrotson 9 hours ago:
No, those properties are tied to the state of mind and experiences of the human, dog, or LLM behind any given comment. When someone posts:

> You could use Redis for that, sure, I've run it and it wasn't as hard as some people seem to fear, but in hindsight I'd prefer some good hardware and a Postgres server: that can scale to several million daily users with your workload, and it's much easier to design around at this stage of your site.

then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights of the author. You can't know whether that's good advice without being the author, but if it's posted by someone you trust, it has value. An LLM could be prompted to pretend it's an experienced DBA and to comment on a thread, and it might produce that sentence; or, if the temperature is a little different, it might just say that you should start with Redis because then you don't have to redesign your whole business when Postgres won't scale anymore.
yellowapple 4 hours ago:
For all you know, that LLM could well have actually run Redis, given the increasing use of AI agents to provision digital infrastructure.
eikenberry 9 hours ago:
> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights of the author.

This implies they know the author and can trust them. If they don't know the author, then there is no trust to break, and they are relying only on the collective intelligence, which could just as well be reflected by an AI. That is to say: trusting a known human author is very different from trusting any human author, and trusting any human author is not that much different from trusting an AI.
fidotron 9 hours ago:
> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights of the author.

This is my point. There is no sane endgame here that doesn't end with each user effectively declaring whom they do and don't care to hear from, and possibly extending that relationship transitively n steps into the graph. For example, you might trust all humans vetted by the German government but distrust HN commenters.

For now, HN and others are free to do as they will (and the current AI situation has been intolerable). However, I suspect that in the near future governments will attempt to impose their own version of this on ever less significant forums, and as a tech community we need to think more clearly about where this goes before we lose all choice in the matter.
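A minimal sketch of the transitive-trust idea described above, assuming each user declares a set of directly trusted authors and trust is extended at most n hops through the graph. The function name, the graph data, and the hop-limit rule are all hypothetical illustrations for this thread, not any real forum's or government's mechanism:

    # Hypothetical sketch: hop-limited transitive trust over a directed
    # trust graph, as in "extend that relationship n steps into the graph".
    from collections import deque

    def trusted_within(graph: dict[str, set[str]], root: str, n: int) -> set[str]:
        """Return every author reachable from `root` via declared trust
        edges in at most n hops (breadth-first search)."""
        seen = {root}
        frontier = deque([(root, 0)])
        while frontier:
            user, depth = frontier.popleft()
            if depth == n:
                continue  # do not extend trust past n hops
            for peer in graph.get(user, set()):
                if peer not in seen:
                    seen.add(peer)
                    frontier.append((peer, depth + 1))
        seen.discard(root)
        return seen

    # Example (made-up data): alice trusts bob directly; with n=2 she
    # also inherits bob's trust in carol, but not carol's trust in dave.
    web = {"alice": {"bob"}, "bob": {"carol"}, "carol": {"dave"}}
    print(trusted_within(web, "alice", 2))  # {'bob', 'carol'}

One design question this sketch leaves open is the one the comment hints at: whether trust should decay with each hop, and how distrust edges (e.g. "distrust HN commenters") override anything inherited transitively.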