anishgupta 5 hours ago
I guess it also depends on which dataset the LLM was trained on. Rare or niche languages get fragmented into more tokens even if the code itself is short. So two languages with the same number of characters can produce very different token counts, because one aligns with patterns the model has seen millions of times and the other does not.
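As a rough illustration (a minimal sketch, not a claim about any specific model: tiktoken's cl100k_base encoding is just a stand-in for whatever tokenizer a given model actually uses, and the two snippets are hypothetical examples), you can compare how many tokens similarly sized snippets fragment into:

```python
# Compare characters vs. tokens for snippets from a common language
# and a rarer one, using cl100k_base as a stand-in tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

snippets = {
    # Common language, heavily represented in training data
    "python": "total = sum(x * x for x in range(10))",
    # Rarer, symbol-heavy language; characters often split into several byte-level tokens
    "apl":    "total ← +/ (⍳10) × (⍳10)",
}

for label, code in snippets.items():
    tokens = enc.encode(code)
    print(f"{label:>6}: {len(code)} chars -> {len(tokens)} tokens")
```

The chars-to-tokens ratio is usually much worse for the snippet whose syntax and symbols the tokenizer rarely saw, which is the fragmentation effect described above.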