ninjagoo 3 hours ago

Lot of folks on here saying they only want to converse with other humans, for various reasons.

But here's the funny thing. I'm pretty sure the frontier models are now smarter than I am, more eloquent, and definitely more knowledgeable, especially the paid versions with built-in search/research capability. I'm also fairly certain that the number of original thoughts in a given discourse on the Internet is fairly small; I know that's certainly the case for me.

So whither humans now?

If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?

tadfisher 3 hours ago | parent | next [-]

Nothing is stopping you from pasting an HN link into your chatbot of choice for an "informed" discussion.

The rest of us want the benefit of lived experience and genuine curiosity in discussions. LLMs are fundamentally incapable of both.

tredre3 an hour ago | parent | prev | next [-]

> If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?

Good news then, you're currently on a forum! So we all agree that humans > AI, regardless of your thoughts on the intelligence behind it.

caditinpiscinam 3 hours ago | parent | prev | next [-]

This reminds me of conversations around plagiarism that come up when working with students: the question of "this other person expressed this idea better than I can, so why can't I just use their writing?"

Because I want to know what you think, because putting our thoughts into words and sharing them is an important part of thinking, because we'll lose these skills if we don't use them, because in thinking for yourself you might come up with something interesting that nobody has ever thought before.

Of course, writers are allowed to reference and use other people's writing: with proper attribution. I don't have a problem with people sharing quality AI-generated content when it's labelled as such. The issue is that most people writing AI comments don't do this, which is itself probably the strongest indictment of the practice.

brailsafe 3 hours ago | parent | prev [-]

Would you hang out with a friend over coffee or something who, rather than conversing with you, recorded your side of the conversation directly into an LLM and then played you back the result? Seems like a good way to kill a relationship.

ninjagoo 2 hours ago | parent [-]

A significant part of my conversations with friends and family already involves referencing LLMs for scoping, explanations, deeper dives, insights, etc. And it's not just me, they use LLMs more than I do. It helps move discussions along. Where before conversation would get bogged down in disputes, now we cover more ground.

If it helps, my friends and family tend to have at least a master's, and the majority have PhDs.

> Would you hang out with a friend over coffee or something who, rather than conversing with you, recorded your side of the conversation directly into an LLM and then played you back the result?

I think the difference is that you're imagining the LLM replaces the conversationalist, but as I said above, my lived experience is that the LLM grounds the discussion: it has effectively replaced internet search as a better, faster, broader, smarter library. It doesn't kill the conversation, it makes it better.