| ▲ | duped 8 hours ago |
| Using LLMs for any kind of writing is unethical, with the narrow exception of translation. If you didn't take the time to compose your words thoughtfully then you aren't owed the time to read them. |
|
| ▲ | dec0dedab0de 8 hours ago | parent | next [-] |
There is a huge difference between using an LLM and just blindly dumping its output on someone verbatim. I think it's fine to have an LLM write a first or second draft of something, then go through and reword most of it to be in your own voice.
| |
| ▲ | oasisbob 6 hours ago | parent | next [-] | | If one is trying to avoid plagiarism, starting with an AI draft and polishing it to avoid signs of its true origins is not a good method. | |
| ▲ | r_lee 8 hours ago | parent | prev | next [-] | | At this point I really think it's better to read broken English than to have to read some clanker slop. It immediately makes me want to just ignore whatever text I'm reading; it's just a waste of time. | | |
| ▲ | runarberg 7 hours ago | parent [-] | | I do wonder; we had pretty good (by some measure of good) machine translations before LLMs. Even better, the artifacts in the old models were easily recognized as machine translation errors, and the mistranslations broke spectacularly: sometimes you could even see the source through the translation, and your brain could guess the intended meaning despite the error. With LLMs this is less clear. You don’t get the old-school artifacts; instead you get hallucinations and very subtle errors that completely alter the meaning while leaving the sentence intact enough that your reader might not know it is a machine translation error. | | |
| ▲ | r_lee 7 hours ago | parent [-] | | And it's not just artifacts/hallucinations; the worst thing about it is that it's basically "perfect" English with perfect formatting, which makes it all look like grey slop, since it all sounds the same and it's hard to distinguish between the slop articles/comments/PRs/whatever. It will also "clean up" the text to the point where important nuances and tangents get removed or transformed into some perfect literature, where it loses its meaning and/or significance |
|
| |
| ▲ | GauntletWizard 7 hours ago | parent | prev | next [-] | | The LLM presents a perverse incentive here: it is used for perceived efficiency gains, most of which would be consumed by the act of rewriting and redrafting. The alienness of the thoughts in the document is also not conducive to this; reading a long document about something you think you know but did not write is exhausting and mentally painful, which is why code review has such relatively poor results. Quite frankly, while having an LLM draft and then rewriting it would be okay, I do not believe it is reasonable to expect that to ever happen. It will either be like high school paper plagiarism (just change around some of the sentences and rephrase it, bro), or it will not even get that much. Given what we know about human psychology, it is unreasonable to expect that "human rewrites of LLM drafts", at the level where the human contributes something, are maintainable and scalable; most people psychologically can't put in that effort. | | |
| ▲ | leptons 6 hours ago | parent [-] | | >The LLM presents a perverse incentive here - It is used for perceived efficiency gains, most of which would be consumed by the act of rewriting and redrafting. It might give efficiency gains for the writer, but the reader has to read the slop and try to guess at what it was intending to communicate and weed out "hallucinations". That's a big loss of efficiency for the reader. |
| |
| ▲ | duped 6 hours ago | parent | prev [-] | | I don't think that's fine, I think that's an example of why using LLMs to write is unethical and creates no value. The purpose of written language is to express your thoughts or ideas to others. If you're synthesizing text and then refining it you're not engaging in that practice. |
|
|
| ▲ | yearolinuxdsktp 7 hours ago | parent | prev | next [-] |
I disagree with the downvotes, but let me put it differently: if you don’t understand, haven’t reviewed, and aren’t ready to own all of the LLM output (the thoughtful part), then you aren’t owed the time it takes to read it. If you didn’t try to rein in the verbose slop that’s the default for LLMs, I don’t want to read it. Maybe the poster is running a local LLM... you’d think that a SOTA model would have surmised that an overnight macOS upgrade can only be a minor version.
|
| ▲ | eru 8 hours ago | parent | prev | next [-] |
| [flagged] |
| |
| ▲ | kibwen 4 hours ago | parent | next [-] | | Agreed, which is why I didn't bother reading this comment before downvoting it. If you think that you were owed some other behavior from me despite not paying me for it, feel free to elaborate; for example, you could acknowledge that there exists an implicit social contract when it comes to basic human communication. | |
| ▲ | 5 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | 5 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | zer00eyz 7 hours ago | parent | prev | next [-] |
> If you didn't take the time to compose your words thoughtfully then you aren't owed the time to read them. Apply this argument to code, to art, to law, to medicine. It fails spectacularly. Blaming the tool for the failure of the person is how you get outrageous arguments that photography can't be art, or that use of Photoshop makes something not art... Do you blame the hammer or the nail gun when the house falls down, or is it the fault of the person who built it? If you don't know what you're doing, it isn't the tool's fault.
| |
| ▲ | abenga 7 hours ago | parent | next [-] | | I of course expect my lawyer and doctor to thoughtfully apply their knowledge to help me. Why should they be any different? | |
| ▲ | lurking_swe 5 hours ago | parent | prev | next [-] | | “compose thoughtfully” != layman terminology. Lawyers thoughtfully write laws that other lawyers understand. I’m not sure why that’s confusing. | |
| ▲ | duped 6 hours ago | parent | prev [-] | | I do apply it to those, and I don't see how it "fails" at anything. Presenting synthesized words as original thought isn't using a tool, it's laziness at best. |
|
|
| ▲ | rebolek 7 hours ago | parent | prev | next [-] |
Using an LLM is perfect for writing documentation, which is something I have always had problems with.
| |
| ▲ | mort96 6 hours ago | parent | next [-] | | As someone who has dealt with projects with AI-generated documentation... I can't really say I agree. Good documentation is terse, efficiently communicating the essential details. AI output is soooooooo damn verbose. What should've been a paragraph becomes a giant markdown file. I like reading human-written documentation, but AI-slop documentation is so tedious I just bounce right off. Plus, when a person wrote the documentation, I can ask the author about details and they'll probably know, since they had enough domain expertise and knowledge of the code to explain anything that might be missing. I can't trust you to know anything about the code you had an AI generate and then had an AI write documentation for. Then there's the accuracy issue. Any documentation can be inaccurate and it can obviously get outdated with time, but at least with human-authored documentation, I can be confident that the content at some point matched a person's best understanding of the topic. With AI, no understanding is involved; it's just probabilistically generated text, and we've all hopefully seen LLMs generate plausible-sounding but completely wrong text often enough to somewhat doubt their output. | |
| ▲ | brookst 6 hours ago | parent [-] | | Classic case of the perfect being the enemy of the good. The choice is not usually “have humans write amazing top-notch documentation, or use an LLM”. The choice is usually “have sparse, incomplete, out-of-date documentation… or use an LLM”. | |
| |
| ▲ | duped 6 hours ago | parent | prev [-] | | This immediately invalidates a software or technical project for me. The value of documentation isn't the output alone, but the act of documenting it, done by a person or people who understand it. I have done a lot of technical writing in my career, and documenting things is exactly where you run into the worst design problems before they go live. |
|
|
| ▲ | wyufro 7 hours ago | parent | prev [-] |
| That's very elitist and unfair to people who previously struggled to form their words but now have a better chance at doing so. |
| |
| ▲ | bigyabai 6 hours ago | parent | next [-] | | An elitist attitude towards plagiarists is common. | | |
| ▲ | brookst 6 hours ago | parent [-] | | Also elitist attitudes towards people for whom English isn’t a native language, elitist attitudes towards people with dyslexia and other conditions that make writing difficult, and elitist attitudes towards people with lower education levels. | | |
| ▲ | eesmith 5 hours ago | parent [-] | | The BBC used to encourage its announcers to use Received Pronunciation, which was associated with high social class. The solution to this form of elitism was not to make everyone speak RP, but to encourage non-RP accents, which are more common in the modern BBC. Your comment seems elitist in that it encourages the use of artifice to fit better into an elitist world, rather than breaking down elitism. |
|
| |
| ▲ | duped 6 hours ago | parent | prev [-] | | I disagree, because those aren't their words. | | |
| ▲ | brookst 6 hours ago | parent | next [-] | | Do we care about words or thoughts? Many folks are more interested in semantic meaning than character sequences. To each their own of course. | | |
| ▲ | duped 4 hours ago | parent [-] | | One problem I see with the broader use of LLMs these days is the death of literacy. For example, you chose to read my response and attack the vocabulary as if that were the point I was trying to make. This is a misunderstanding. I am purposefully reusing the word choice of the comment I'm replying to. I was trying to very concisely point out that if an LLM is generating your writing, it is not your words or your thoughts that you're trying to communicate. |
| |
| ▲ | CamperBob2 6 hours ago | parent | prev [-] | | How'd you learn to write? |
|
|