| ▲ | mort96 8 hours ago |
| Why would you want to use a chat bot to translate? Either you know the source and destination languages, in which case you'll almost certainly do a better job (certainly a more trustworthy job), or you don't, in which case you shouldn't be handling translations for that language anyway. Same with grammar fixes. If you don't know the language, why are you submitting grammar changes?? |
|
| ▲ | denkmoon 8 hours ago | parent | next [-] |
| For translating communications like "Here is my PR, it does x, can you please review it", not localisation of the app. |
|
| ▲ | MarsIronPI 8 hours ago | parent | prev [-] |
| No, I think GP means grammar fixes to your own communication. For example if I don't speak Japanese very well and I want to write to you in Japanese, I might write you a message in Japanese, then ask an LLM to fix up my grammar and check my writing to make sure I'm not sounding like a complete idiot. |
| |
| ▲ | mort96 8 hours ago | parent [-] | | I have read a lot of bad grammar from people who aren't very good at the language but are trying their best. It's fine. Just try to express yourself clearly and we'll figure it out. I have read text where people who aren't very good at the language try to "fix it up" by feeding it through a chat bot. It's horrible. It's incredibly obvious that they didn't write the text, the tone is totally off, it's full of obnoxious ChatGPT-isms, etc. Just do your best. It's fine. Don't subject your collaborators to shitty chat bot output. | | |
| ▲ | habinero 7 hours ago | parent | next [-] | | Agreed. Humans are insanely good at figuring out intent and context, and running stuff through an LLM breaks that. The times I've had to communicate IRL in a language I don't speak well, I do my best to speak slowly and enunciate, and trust they'll try their best to figure it out. It's usually pretty obvious what you're asking lol. (Also a lot of people just reply with "Can I help you?" in English lol) I've occasionally had to email sites in languages I don't speak (to tell them about malware or whatever), and I write up a message in the simplest, most basic English I can. I run that through machine translation, prefix it with "This was generated by Google Translate", and include both in the email. Just do your best to communicate intent and meaning, and don't worry about sounding like an idiot. | | |
| ▲ | adastra22 6 hours ago | parent [-] | | > Humans are insanely good at figuring out intent and context I wish that were true. |
| |
| ▲ | pessimizer 8 hours ago | parent | prev [-] | | You seem to be judging business communications by weird middle-class aesthetics, while the people writing the emails are just trying to be clear. If you think that every language level is always sufficient for every task (a fluency truther?), then you should agree that somebody who writes an email in a language they are not confident in, puts it through an LLM, and decides the result explains the idea they were trying to convey better than they had managed is always correct in that assessment. Why are you second-guessing them and indirectly criticizing their language skills? | | |
| ▲ | mort96 8 hours ago | parent [-] | | Running your words through ChatGPT doesn't make you clear. If your own words are clear enough to be understood by ChatGPT, they're clear enough to be understood by your peers. Adding ChatGPT into the mix only adds opportunities for meaning to get mangled. And text that's ambiguous enough may be translated into perfectly clear text that reflects the wrong interpretation of your words, risking misunderstandings that wouldn't happen if the ambiguity had been preserved instead of eliminated. I have no idea what you're talking about with regard to being a "fluency truther"; I think you're putting words in my mouth. | | |
| ▲ | pixl97 7 hours ago | parent [-] | | Eh, na dawg, I'll have to reject a lot of what you've typed here. LLMs can do a lot of proofreading of what you've written, like checking for logical contradictions in what I've stated. It will catch where I've forgotten something like a 'not' in one statement, so that one sentence unintentionally gives a negative response while another gives a positive one. This kind of error is often hard for me to pick up on, yet the LLM seems to do it well. |
|
|
|
|