▲ | dataflow 3 days ago
Wow, this is the first time I'm hearing such a thing. For clarity: I pasted the output so a ton of people wouldn't repeat the same question to ChatGPT and burn a ton of CO2 to get the same answer. I didn't paste the query since I didn't find it interesting, and I didn't fact-check because I didn't have the time; I was walking and had a few seconds to do this on my phone. Not sure how this was rude; I certainly didn't intend it to be...
▲ | drsopp 3 days ago
Why?
▲ | danieldk 3 days ago
Because it is terribly low-effort. People are here for interesting and insightful discussions with other humans. If they were interested in unverified LLM output… they would ask an LLM?
▲ | drsopp 3 days ago
Who cares if it is low effort? I got lots of upvotes for my link to Claude about this, and pncnmnp seems happy. The downvoted comment from ChatGPT was maybe a bit spammy?
▲ | lcnPylGDnU4H9OF 3 days ago
> Who cares if it is low effort?
It's a weird thing to wonder after so many people expressed their dislike of the upthread low-effort comment with a downvote (and then another voiced a more explicit opinion). The point is that a reader may want to know that the text they're reading is something a human took the time to write themselves. That fact is what makes it valuable.
> pncnmnp seems happy
They just haven't commented. There is no reason to attribute this specific motive to that fact.
▲ | drsopp 3 days ago
> The point is that a reader may want to know that the text they're reading is something a human took the time to write themselves.
The reader may also simply want information that helps them.
> They just haven't commented.
Yes, they did.
▲ | Dylan16807 3 days ago
> The reader may also simply want information that helps them.
The reader will generally want at least a cursory verification that it is information that helps, which dataflow didn't try to do. Especially when you're looking for specific documents and you don't check if the documents are real. (dataflow's third one doesn't appear to be.)
▲ | bee_rider 3 days ago
Yours was a little bit more useful; you essentially used the LLM as a search engine to find a real article, right? Directly posting the random text generated by the LLM is more annoying. I mean, they didn't even vouch for it or verify that it was right.
▲ | aeonik 3 days ago
I don't think it's rude. It saves me from having to come up with my own prompt and wade through the back-and-forth to get useful insight from the LLMs, and it saves me from spending my own tokens. Also, I quite love it when people clearly demarcate which part of their content came from an LLM and specify which model. The little citation carries a huge amount of useful information. The folks who don't like AI should like it too, as they can easily filter the content.