▲ abraae 2 hours ago
Is that counterintuitive? If I had a model trained on 10 different programming languages, including my target language, I would expect it to do better than a model trained only on my target language, simply because it has access to so much more code/algorithms/examples than my language alone. There is a lot of commonality between programming languages, just as there is between human languages, so training on one language would be beneficial to competency in other languages.
▲ dagss 2 hours ago | parent
> simply because it has access to so much more code/algorithms/examples than my language alone

I assumed that is exactly what is accounted for by "even when controlling for the size of the training set". I.e., assuming I am reading it right: it is better to get the same amount of data split as 25% in each of 4 languages than as 100% in one language.
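A toy sketch of what "controlling for the size of the training set" means here (purely illustrative; the corpus names and sizes are made up): both conditions draw the same total number of training samples, and only the language mix differs.

```python
import random

random.seed(0)

# Hypothetical corpora: one list of samples per language (names are illustrative).
corpora = {
    "python": [f"py_{i}" for i in range(1000)],
    "go":     [f"go_{i}" for i in range(1000)],
    "rust":   [f"rs_{i}" for i in range(1000)],
    "java":   [f"jv_{i}" for i in range(1000)],
}

BUDGET = 1000  # fixed total training-set size: the controlled variable

# Condition A: 100% of the budget from the target language.
monolingual = random.sample(corpora["python"], BUDGET)

# Condition B: the same budget split evenly, 25% from each of 4 languages.
per_lang = BUDGET // len(corpora)
multilingual = [s for lang in corpora.values()
                for s in random.sample(lang, per_lang)]

# Both training sets are the same size; only the composition differs.
assert len(monolingual) == len(multilingual) == BUDGET
```

The claim being discussed is that, under this equal-budget setup, the mixed set (condition B) yields the better model, not merely that more total data helps.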