pxc 6 hours ago
For short texts, the translation I usually want the most is fast translation, and local models are actually great for this. But for high-ish quality translations of substantive texts, you typically want a harness that's pretty different from Claude Code. You want a glossary of technical terms or special names, a structured summary of the wider context, a concise style guide, and you have to chop the text into chunks to ensure nothing is missed. Even with super long context models, if you ask them to translate much at once they just translate an initial portion of it and crap out. Are you using it for localization or short strings of text in an app? I wonder what you can do to get better results out of smaller models. I'm confident there's a way.
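A minimal sketch of the kind of harness described above: a glossary, a context summary, a style guide, and paragraph chunking so nothing is silently dropped. The model call itself is left out; the function names and prompt shape here are illustrative assumptions, not any particular library's API.

```python
def chunk_paragraphs(text, max_chars=1500):
    """Greedily pack paragraphs into chunks under a size budget,
    so each request is small enough that the model translates all of it."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def build_prompt(chunk, glossary, summary, style_guide):
    """Assemble one translation request carrying the shared context,
    so terminology and tone stay consistent across chunks."""
    glossary_lines = "\n".join(f"- {src}: {tgt}" for src, tgt in glossary.items())
    return (
        f"Context summary:\n{summary}\n\n"
        f"Style guide:\n{style_guide}\n\n"
        f"Glossary (always use these renderings):\n{glossary_lines}\n\n"
        "Translate the following passage completely; do not summarize or skip:\n"
        f"{chunk}"
    )

text = "First paragraph about the kernel scheduler.\n\nSecond paragraph.\n\nThird paragraph."
chunks = chunk_paragraphs(text, max_chars=60)
prompts = [
    build_prompt(c, {"scheduler": "Scheduler"}, "Kernel docs.", "Plain, technical.")
    for c in chunks
]
```

Joining the chunks back together reproduces the source text exactly, which is an easy invariant to test before sending anything to a model.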
holoduke 2 hours ago | parent
Yea, I agree. In our case we are creating short news articles of max 3 or 4 paragraphs. The texts are translated in multiple passes into various languages. We use a simple system prompt that instructs the LLM to produce simple, authentic-sounding language. With Opus we get seriously good results. The goal is not literal translation, but good translation. I tried Haiku for a while, but it's not good in many languages. Sonnet is okay-ish, but not good enough.
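The multi-pass flow described here could look roughly like the sketch below, with the model call stubbed out (no specific provider or API is assumed). Pass 1 drafts the translation; pass 2 revises the draft for naturalness rather than retranslating, which is one way to get idiomatic rather than literal output.

```python
# Hypothetical system prompt along the lines the comment describes.
SYSTEM_PROMPT = (
    "Translate news copy into the target language. Prefer simple, natural, "
    "idiomatic phrasing over literal word-for-word renderings."
)

def call_model(system, user):
    # Stub standing in for a real LLM call; replace with your provider's client.
    return f"[{system[:20]}...] {user}"

def translate_article(article, target_lang):
    # Pass 1: draft translation of the whole short article.
    draft = call_model(SYSTEM_PROMPT, f"Translate into {target_lang}:\n{article}")
    # Pass 2: revise the draft for fluency without drifting from the source.
    revised = call_model(SYSTEM_PROMPT, f"Revise for natural {target_lang}:\n{draft}")
    return revised

results = {lang: translate_article("Short article text.", lang) for lang in ["de", "fr"]}
```

Keeping the article at 3-4 paragraphs, as described above, is what makes a single request per pass workable without chunking.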