ACCount37 | 5 days ago
Major industry players have been doing that for a while now. It's just hard to design training regimes that actually give LLMs better hallucination-avoidance capabilities, and it's easy to damage those capabilities by training an LLM the wrong way, as OpenAI demonstrated when they fried o3 with RLVR that encouraged guesswork. The "SAT test incentivizes guesswork" example they give in the article is one they had to learn the hard way themselves.
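To make the incentive concrete, here's a minimal sketch of the expected-value argument. This is not OpenAI's actual reward setup; the reward values and function names are illustrative assumptions. With binary pass/fail verifiable rewards, guessing strictly dominates abstaining, while SAT-style negative marking removes that advantage:

    # Sketch: why binary verifiable-reward grading pushes a model toward guessing.
    # Reward values and names are illustrative assumptions, not any lab's real setup.

    def binary_reward(outcome: str) -> float:
        # Binary RLVR-style grading: only a correct answer earns reward.
        return 1.0 if outcome == "correct" else 0.0

    def abstention_aware_reward(outcome: str) -> float:
        # Alternative grading: confident wrong answers are penalized and
        # admitting uncertainty gets neutral credit (SAT-style negative marking).
        return {"correct": 1.0, "wrong": -0.5, "abstain": 0.0}[outcome]

    def expected_reward(reward_fn, p_correct: float) -> dict:
        # Expected reward of guessing (right with probability p_correct)
        # versus always abstaining.
        guess = p_correct * reward_fn("correct") + (1 - p_correct) * reward_fn("wrong")
        abstain = reward_fn("abstain")
        return {"guess": guess, "abstain": abstain}

    if __name__ == "__main__":
        # Even with only a 20% chance of being right, binary grading still
        # rewards guessing (0.2 > 0.0); the abstention-aware scheme does not.
        print("binary:          ", expected_reward(binary_reward, 0.2))
        print("abstention-aware:", expected_reward(abstention_aware_reward, 0.2))

Under the binary scheme the model is trained to always produce an answer, however unsure it is, which is exactly the guesswork behavior the article describes.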