▲ | pityJuke 4 days ago | |||||||||||||||||||
This just sounds to me like you added needless information to the context of the model that lead to it producing lower quality code? | ||||||||||||||||||||
▲ | willahmad 4 days ago | parent | next [-] | |||||||||||||||||||
It can happen because training data contains lots of rejections to groups (Iran sanctioned, don't do business with Iran and so on). Then model might be generalizing 'rejection' to other types of responses | ||||||||||||||||||||
▲ | encrux 4 days ago | parent | prev [-] | |||||||||||||||||||
> The requests said the code would be employed in a variety of regions for a variety of purposes. This is irrelevant if the only changing variable is the country. From a ML-perspective adding any unrelated country name shouldn’t matter at all. Of course there is a chance they observed an inherent artifact, but that should be easily verified if you try this same exact experiment on other models. | ||||||||||||||||||||
|