cedmans · 4 hours ago
Brazen use of LLM output is disrespectful to the target audience to begin with. If I'm expected to spend the mental capital needed to understand the context and content of your writing, I at the very least expect that you did the same when actually authoring it.
solid_fuel · 3 hours ago
It also feels, to some degree, like using one of those cereal-box decoder wheels. If someone sends me 10 paragraphs of output from ChatGPT, and they only wrote a sentence to prompt it, then the output is really just a re-encoding of the information in the original prompt. Quite literally: if they sent me the text of the prompt, I could obtain the same output, so the output is just a more verbose way of stating the prompt. I find it really disrespectful to talk to people through an LLM like that.
blks · 3 hours ago
Exactly the same argument can and should be applied to generated code.