▲ | munk-a | 2 hours ago
PHP went through a similar effort a while back to clear terrible, out-of-date advice (e.g. posts advocating magic_quotes) out of places like Stack Overflow. LLMs make this a slightly different problem because, for the most part, once the bad advice is in the model it's never going away. In theory there's an easier-to-test surface around how good the model's advice is, but figuring out how it reached a conclusion, and correcting that for future models, is arcane. It's unlikely that model trainers will submit their RC models to various communities to make sure they aren't lying about those specific topics, so everything has to happen in preparation for the next generation, relying on the hope that you've identified the bad source the model originally trained on and that the model will actually prioritize training on that same, now corrected, source.
▲ | miki123211 | 36 minutes ago | parent
This is one area where reinforcement learning can help.

The way you should think of RL (both RLVR and RLHF) is through the "elicitation hypothesis" [1]. In pretraining, models learn their capabilities by consuming large amounts of web text. Those capabilities include producing both low- and high-quality outputs, as both are present in the pretraining corpora. In post-training, RL doesn't teach the models new skills (see e.g. the "Limits of RLVR" [2] paper). Instead, it "teaches" them to produce the more desirable, higher-quality outputs while suppressing the undesirable, low-quality ones.

I'm pretty sure you could design an RL task that specifically teaches models to use modern idioms, either as an explicit dataset of chosen/rejected completions (where the chosen completion uses the new way and the rejected one the old), or as a verifiable task where the reward goes down as the number of linter errors goes up.

I wouldn't be surprised if frontier labs have datasets like this for some of the major languages and packages.

[1] https://www.interconnects.ai/p/elicitation-theory-of-post-tr...
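The linter-based variant of that verifiable reward could be sketched roughly like this. This is a minimal illustration, not anyone's actual training setup: the deprecated-idiom patterns and the reward shape (1 / (1 + errors)) are assumptions for the example, and a real pipeline would shell out to an actual linter (PHPStan, a PHP_CodeSniffer ruleset, etc.) rather than use regexes:

```python
import re

# Hypothetical stand-in for a linter's rule set: patterns for
# deprecated PHP idioms. A real setup would invoke an actual
# linter and count the errors it reports.
DEPRECATED_PATTERNS = [
    r"magic_quotes",
    r"\bmysql_query\(",  # removed in PHP 7; mysqli/PDO replaced it
    r"\bereg\(",         # removed in PHP 7; preg_match replaced it
]

def lint_error_count(code: str) -> int:
    """Count occurrences of deprecated idioms in a completion."""
    return sum(len(re.findall(p, code)) for p in DEPRECATED_PATTERNS)

def reward(code: str) -> float:
    """Verifiable reward: 1.0 for clean code, decaying toward 0
    as the number of 'linter errors' grows."""
    return 1.0 / (1.0 + lint_error_count(code))

old_style = 'mysql_query("SELECT * FROM users");'
new_style = '$pdo->query("SELECT * FROM users");'
```

Here `reward(new_style)` exceeds `reward(old_style)`, so an RL loop optimizing this signal would push the model toward the modern idiom; the same pair could equally serve as a chosen/rejected example in a preference dataset.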