overfeed a day ago

> Basically an LLM translating across languages will "light up" for the same concepts across languages

Which is exactly what they are trained to do. Translation models wouldn't be functional if they were unable to correlate an input with specific outputs. That some hidden-layer neurons fire for the same concept across languages shouldn't come as a surprise; it's a basic feature required for the core functionality.
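For anyone who wants to poke at this themselves, here's a minimal sketch of that kind of cross-lingual check. It assumes a multilingual encoder (xlm-roberta-base here, picked purely for illustration) and hand-written parallel sentences, and it just mean-pools the last hidden layer and compares cosine similarities, which is a much cruder probe than what the interpretability work actually does:

    import torch
    from transformers import AutoTokenizer, AutoModel

    # Any multilingual encoder works for this rough check.
    name = "xlm-roberta-base"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    model.eval()

    def sentence_vector(text: str) -> torch.Tensor:
        """Mean-pool the last hidden layer into one vector per sentence."""
        inputs = tok(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
        return hidden.mean(dim=1).squeeze(0)

    # The same concept in three languages vs. an unrelated sentence.
    en = sentence_vector("The cat sleeps on the warm windowsill.")
    fr = sentence_vector("Le chat dort sur le rebord de fenêtre chaud.")
    de = sentence_vector("Die Katze schläft auf der warmen Fensterbank.")
    other = sentence_vector("The stock market fell sharply this morning.")

    cos = torch.nn.functional.cosine_similarity
    print("en vs fr:   ", cos(en, fr, dim=0).item())
    print("en vs de:   ", cos(en, de, dim=0).item())
    print("en vs other:", cos(en, other, dim=0).item())

If the shared-concept claim holds even at this crude level, the translated pairs should come out noticeably closer than the unrelated sentence.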

balder1991 a day ago | parent

And if it's true that language is just the last step, applied after the answer is already conceptualized, why do models perform differently in different languages? If it were just a matter of language, they'd give the same answer, just with broken grammar, no?

kaibee a day ago | parent

If you suddenly had to do all your mental math in base-7, do you think you'd be just as fast and accurate as you are in base-10? Is that because you don't have an internal world-model of mathematics? Or is it because language and world-model are mutually dependent?
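To make the base-7 bit concrete, here's a throwaway sketch; the numbers are arbitrary, and the point is only that familiar facts stop looking familiar once the notation changes:

    def to_base7(n: int) -> str:
        """Render a non-negative integer in base-7 notation."""
        if n == 0:
            return "0"
        digits = []
        while n:
            digits.append(str(n % 7))
            n //= 7
        return "".join(reversed(digits))

    # The same quantities, written in an unfamiliar base:
    print(to_base7(42))       # "60"  -- six sevens and no ones
    print(to_base7(42 + 13))  # "106" -- 55 in base-10
    print(to_base7(7 ** 2))   # "100" -- even the "round numbers" move

The underlying quantities don't change; only the notation you have to reason through does.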