runarberg 6 hours ago
I do wonder. We had pretty good (by some measure of good) machine translation before LLMs. Even better, the artifacts in the old models were easily recognized as machine-translation errors, and the mistranslations broke spectacularly: sometimes you could even see the source language showing through in the translation, and your brain could guess the intended meaning through the error. With LLMs this is less clear. You don’t get the old-school artifacts; instead you get hallucinations and very subtle errors that completely alter the meaning while leaving the sentence intact enough that your reader might never know it was a machine-translation error.
r_lee 5 hours ago | parent
And it’s not just the artifacts/hallucinations. The worst thing about it is that the output is basically "perfect" English with perfect formatting, which makes it all read like gray slop: it all sounds the same, so it’s hard to distinguish the slop articles/comments/PRs/whatever from each other. It will also "clean up" the text to the point where important nuances and tangents get removed or transformed into some perfectly polished prose that loses its meaning and/or significance.