int_19h 3 hours ago
Not really. I have observed the same thing in Russian, and no, it's not for expressions that are translated literally. Having said that, SOTA models got much better at this kind of stuff. They're quite able to write in a way that is indistinguishable from a native speaker, colloquialisms and all, with the right prompting. But SOTA models are also expensive. Most automated translations are done with something way cheaper and worse.
Jach an hour ago
That's basically my complaint about all these AI translations being shoved into places. If they were better, I'd complain less. I suppose they'll get better eventually, but that's not really guaranteed. Google Translate is still somehow terrible and has stagnated for a long time. Things got a little better with DeepL, but now SOTA LLMs (e.g. ChatGPT) are quite good (even if they still have a ways to go) and so far beyond the old stuff, and they let you actually discuss the translation itself, with the nuances kept or lost, rather than just getting a straight one-shot A-to-B output.