alontorres 5 hours ago

I think that this requires some nuance. Was the post generated with a simple short prompt that contributed little? Sure, it's probably slop.

But if the post was generated through a long process of back-and-forth with the model, where significant modifications/additions were made by a human? I don't think there's anything wrong with that.

yabones 5 hours ago | parent | next

I don't see what value the LLM would add - writing itself isn't that hard. Thinking is hard, and outsourcing that to an LLM is what people dislike.

alontorres 4 hours ago | parent | next

I'd push back a bit on "writing itself isn't that hard." Clear writing is difficult, and many people with good ideas struggle to communicate them effectively. An LLM can help bridge that gap.

I do agree with your core point - the thinking is what matters. Where I've found LLMs most useful in my own writing is as a thinking tool, not a writing tool.

Using them to challenge my assumptions, point out gaps in my argument, or steelman the opposing view. The final prose is mine, but the thinking got sharper through the process.

Zambyte 5 hours ago | parent | prev

Using an LLM to ask you questions about what you wrote can help you surface assumptions you're making about the reader, and can point you to passages that would be better written another way or elaborated upon.

fwip 5 hours ago | parent | prev | next

One problem is that it's exceedingly difficult to tell, as a reader, which scenario you have encountered.

alontorres 4 hours ago | parent

This is the strongest argument against it, I think. Sometimes you can't easily tell from the output whether someone thought deeply and used AI to polish, or just prompted and published. That adds another layer of cognitive burden when parsing text, which is frustrating.

But AI-generated content is here to stay, and it's only going to get harder to distinguish the two over time. At some point we probably just have to judge text on its own merits regardless of how it was produced.

Linux-Fan 2 hours ago | parent

My exposure to and use of “AI” has been very limited so far, so that is what I have been doing all along: reading the text mostly irrespective of its origin.

I do notice that recently I more often find myself wondering what point the author wanted to make, only to then spot a lot of what seem to be the agreed-upon telltale signs of excessive AI usage.

Effectively, there was already a lot of spam before, so in general I don't mind so much. It is interesting to see, though, that the “new spam” often gets some traction and interesting comments on HN, which used not to be the case.

It also means that behind the spam layer there is possibly some interesting information the writer wanted to share, and for that purpose I imagine I'd prefer to read the unpolished prompt-input variant over the final output. So far, though, I haven't seen any posts where both versions were shared, so there's no way to test whether that would indeed be better.

lproven 5 hours ago | parent | prev

You do you.

I do think there's a great deal wrong with that, and I won't read it at all.

Human can speak unto human unless there's a language barrier. I am not interested in anyone's mechanically-recovered verbiage, no matter how much they massaged it.