| ▲ | JackFr 4 days ago |
> As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.

What if they are a very limited English speaker, using the AI to tighten up their responses into grammatical, idiomatic English?
|
| ▲ | baobun 4 days ago | parent | next [-] |
| I'd rather have broken grammar and an honest and useful meta-signal than botched semantics. Also that better not be a sensitive conversation or contain personal details or business internals of others... Just don't. |
| ▲ | NewsaHackO 3 days ago | parent [-] |
But the meta-signal you get is detrimental to the writer, so why wouldn't they want to mask it?
| ▲ | habinero 3 days ago | parent | next [-] |
If I think you're fluent, I might think you're an idiot when really you just don't understand. If I know they struggle with English, I can simplify my vocabulary, speak slower/enunciate, and check in occasionally to make sure I'm communicating in a way they can follow.
| ▲ | NewsaHackO 3 days ago | parent [-] |
Both of those options are exactly what the writer wants to avoid though, and the reason they are using AI for grammar correction in the first place.
| ▲ | baobun 3 days ago | parent | prev [-] |
Security and ethics. If those don't apply: as mentioned, if I realize, I will also ignore them where I can and judge their future communications as malicious, incompetent, inconsiderate, and/or meaningless.
| ▲ | NewsaHackO 3 days ago | parent [-] |
But if they are using it for copywriting/grammar edits, how would you know? For instance, have I used AI to help correct grammar for these replies?
|
|
|
|
| ▲ | natebc 3 days ago | parent | prev [-] |
I'd rather have words from a human's mind, full stop.