int_19h | 3 days ago
As someone whose native language isn't English, I disagree. SOTA models are scary good at translation, at least for some languages. They do make mistakes, but at this point they're the kind of mistakes that a non-native but highly proficient speaker might make: very subtle word-order issues, or word choices that betray that the translator is still thinking in another language (which for LLMs is almost always English, because of its dominance in the training set). I also disagree that it's "not even remotely close to reaching human-like quality". I have translated large chunks of books into languages I know, and the results are often better than what commercial translators produce.