nl 2 days ago
This is simply not true. Modern LLMs are trained by reinforcement learning, where the model tries to solve a coding problem and receives a reward if it succeeds. The Data Processing Inequality (from your link) isn't relevant here: the model is learning from the reinforcement signal, not from human-written code.
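To make the point concrete, here's a toy sketch of the reward-driven loop being described: a policy samples a candidate program, a verifier runs the tests, and only the pass/fail reward (not any human-written reference solution) drives the update. This is a hypothetical REINFORCE-style bandit, not any lab's actual training setup; the candidate programs and test are made up for illustration.

```python
import math
import random

# Three candidate "programs" for one toy problem; only one passes the test.
CANDIDATES = [
    "def add(a, b): return a - b",   # buggy
    "def add(a, b): return a + b",   # correct
    "def add(a, b): return a * b",   # buggy
]

def reward(src: str) -> float:
    """Verifiable reward: execute the candidate and run a unit test."""
    ns = {}
    try:
        exec(src, ns)
        return 1.0 if ns["add"](2, 3) == 5 else 0.0
    except Exception:
        return 0.0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def train(steps=500, lr=0.5, seed=0):
    """REINFORCE on a 3-arm bandit: the reward signal alone shapes the policy."""
    rng = random.Random(seed)
    logits = [0.0] * len(CANDIDATES)
    baseline = 0.0  # running-average reward, reduces gradient variance
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(CANDIDATES)), weights=probs)[0]
        r = reward(CANDIDATES[i])
        baseline += 0.1 * (r - baseline)
        adv = r - baseline
        # grad of log pi(i) w.r.t. logits is one_hot(i) - probs
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * adv * grad
    return softmax(logits)

probs = train()
best = max(range(len(probs)), key=lambda i: probs[i])
```

After training, the policy concentrates on the candidate that passes the test, even though no correct solution ever appeared in its "training data": the information came in through the reward channel.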
jacquesm 2 days ago | parent
OK, then we can leave the training data out of the input; everybody's happy.