| ▲ | fidotron 10 hours ago |
| The only question is whether the entity is interesting and/or correct. Those properties are in the eye of the beholder. Whether they're human or not is beside the point. After all, no one knows I'm a dog. |
|
| ▲ | LeifCarrotson 9 hours ago | parent | next [-] |
| No, those properties are tied to the state of mind and experiences of the human, dog, or LLM behind any given comment. When someone posts:

> You could use Redis for that, sure, I've run it and it wasn't as hard as some people seem to fear, but in hindsight I'd prefer some good hardware and a Postgres server: that can scale to several million daily users with your workload, and is much easier to design around at this stage of your site.

then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author. You can't know whether that's good advice without being the author, but if it's posted by someone you trust it has value. An LLM could be prompted to pretend it's an experienced DBA and to comment on a thread, and might produce that sentence; or, if the temperature is a little different, it might just say that you should start with Redis because then you don't have to redesign your whole business when Postgres won't scale anymore. |
| |
| ▲ | yellowapple 4 hours ago | parent | next [-] | | For all you know, that LLM may well have actually run Redis, given the increasing use of AI agents for digital infrastructure provisioning. | |
| ▲ | eikenberry 9 hours ago | parent | prev | next [-] | |

> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.

This implies they know the author and can trust them. If they don't know the author then there is no trust to break, and they are relying only on the collective intelligence, which could just as well be reflected by an AI. That is to say, trusting a known human author is very different from trusting any human author, and trusting any human author is not that much different from trusting an AI. | |
| ▲ | fidotron 9 hours ago | parent | prev [-] | |

> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.

This is my point. There is no sane endgame here that doesn't end up with each user effectively declaring whom they do and don't care to hear from, and possibly transitively extending that relationship n steps into the graph (sketched below). For example, you might trust all humans vetted by the German government but distrust HN commenters. For now HN and others are free to do as they will (and the current AI situation has been intolerable); however, I suspect that in the near future governments will attempt to impose their own versions of this on ever less significant forums, and as a tech community we need to be thinking more clearly about where this goes before we lose all choice in the matter.
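For what it's worth, here's a minimal sketch of that transitive trust lookup in Python; the user names, the graph shape, and the depth cutoff n are all illustrative assumptions, not anything HN or any government has actually specified:

    from collections import deque

    # graph maps each user to the set of users they have declared they trust.
    def trusted_within(graph, root, n):
        """Everyone `root` trusts, following trust edges at most n steps out."""
        seen = {root}
        frontier = deque([(root, 0)])
        while frontier:
            user, depth = frontier.popleft()
            if depth == n:
                continue  # don't extend trust past n steps
            for peer in graph.get(user, set()):
                if peer not in seen:
                    seen.add(peer)
                    frontier.append((peer, depth + 1))
        return seen - {root}

    # Hypothetical users: alice trusts bob; with n=2 she also inherits
    # bob's trust in carol, but not carol's trust in dave.
    graph = {"alice": {"bob"}, "bob": {"carol"}, "carol": {"dave"}}
    print(trusted_within(graph, "alice", 2))  # {'bob', 'carol'}

Distrust entries (the HN-commenters case above) would then filter that set rather than extend it, which is why the declarations have to be per-user. |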
|
|
| ▲ | AlecSchueler 10 hours ago | parent | prev | next [-] |
| > The only question is whether the entity is interesting and/or correct.

This already falls apart, though. There are whole categories of things which I find "incorrect" and would take up as an argument with a fellow human. But trying to change the mind of an LLM just feels like a waste of my time. |
| |
| ▲ | throwaway2027 9 hours ago | parent | next [-] | |

> But trying to change the mind of an LLM just feels like a waste of my time.

It often is with humans as well. | | |
| ▲ | AlecSchueler 9 hours ago | parent [-] | | Indeed it is, and there are often times I choose not to engage with my fellow humans. But the exceptions are valuable to me and to others. With an LLM I don't feel there would be any exception; that's the difference. |
| |
| ▲ | yellowapple 4 hours ago | parent | prev | next [-] | | Arguing for the sake of convincing the other person is doomed to failure, even without the possibility of that person being an LLM. Arguing for the sake of convincing onlookers reading the conversation is more likely to be effective, and in that case it doesn't matter whether the other person is an LLM. | |
| ▲ | skeledrew 9 hours ago | parent | prev [-] | | Instead of wanting to change the mind of the other entity, how about focusing on coming to a mutual understanding of what is "correct"? That way it shouldn't matter much if said entity is human, LLM or dog. Unless you're just arguing to push your "correct" on other humans, with little care about their "correct". | | |
| ▲ | AlecSchueler 9 hours ago | parent [-] | | It feels like you've loaded quite a lot in a way that seems unfair: "pushing" and "little care" etc. Maybe I should have used a term like "discuss" rather than the more loaded "argue." Look, I'll give you a loose example: it's not uncommon to see a post making an "error" I know from experience. I might take the time to help someone learn, more quickly than I did, what got me out of that mistaken line of thought. If it's an LLM, why would I care? There are thousands of other people, even other LLMs, that I could be talking to instead. You've set up a framework here where "mutual understanding" is the end goal, but that's just not always what's on the line. |
|
|
|
| ▲ | craftkiller 9 hours ago | parent | prev [-] |
| Not necessarily. Using AI you can trivially run astroturfing campaigns to influence public perception, and that doesn't really fall on the interestingness or correctness spectrums. For example, if 90% of comments online claim in a serious tone that birds aren't real, you might convince people to fall into that delusion: it becomes "common knowledge" rather than a fringe theory. But if comments reflect reality, then only a tiny portion of people will have "learned the truth" about birds, so people will read those claims with more skepticism. (Naturally, "birds aren't real" is a correct-vs-not-correct thing, but the same applies to many less objective things, like the best mechanical keyboard or the morality of a war.) |