They have a clear incentive to do exactly that: regurgitation is a problem because it indicates the model failed to learn from the data and merely memorized it.