yellowapple 3 hours ago
Knowing whether or not the AI changed the meaning of what you wrote is not reliant on knowing which specific rules you broke. It's only reliant on you actually reading what the AI spat out and deciding "yes, this is what I meant" or "no, this is not what I meant". Unless you're arguing that the rule violations are something the author intends to be part of the meaning of what one wrote?
duskdozer 2 hours ago
> Knowing whether or not the AI changed the meaning of what you wrote is not reliant on knowing which specific rules you broke. It's only reliant on you actually reading what the AI spat out and deciding "yes, this is what I meant" or "no, this is not what I meant".

That's fair.

> Unless you're arguing that the rule violations are something the author intends to be part of the meaning of what one wrote?

I think what I wanted to get at is more like this:

1. I think the rule violations may be part of the meaning.

2. I think people would be primed to accept changes even when those changes alter the meaning.

3. I suspected that it would always correct something and wouldn't just say LGTM even if the input was fine.

To check (and at the risk of this being hypocritical), I asked for a grammar correction on the part of your post that I thought had no mistakes, and both in context and in isolation, it corrected "spat out" to "produced." This isn't a huge deal, but it is a loss of the connotation of "spat out," which is the phrasing you chose. I think grammatical errors are low-cost, while changes in meaning and intent are high-cost, so given point 2 above, running your writing through an LLM risks losing more than it gains.