▲ | jasonjmcghee 5 days ago |
Conceptually, it's effectively a GAN.
▲ | frumiousirc 5 days ago | parent | next |
My initial thought as well. But what is the "Discriminator" here? What grounds the training toward reality? The "Challenger" and "Solver" adversarial dynamic alone can only serve to amplify hallucination.

Ahh, GPT-4o is the arbiter. So, basically, this is a way to perform LLM model compression (GPT-4o to qwen3) while maximizing the in-distribution domain size. As such, it seems reasonable and useful.

However, the reliance on an arbiter LLM makes the claim that it will overcome the lack of training data unreasonable. Once the target LLM is scaled up to match the in-distribution domain size of the arbiter, it seems to me it will turn back into a hallucination amplifier.
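To make the speculated structure concrete, here is a minimal sketch in Python of the loop as read above. `challenger`, `solver`, and `arbiter_score` are hypothetical stand-ins for calls to the three models, not the paper's actual interfaces; the point is only where the grounding signal enters.

    # Hypothetical sketch of the Challenger/Solver/arbiter loop described above.
    # All three functions are stand-ins for model calls, not the paper's API.

    def challenger(history):
        """Propose a task meant to be hard for the current solver (stand-in)."""
        return "some task"

    def solver(task):
        """Attempt the task (stand-in for the model being trained, e.g. qwen3)."""
        return "some answer"

    def arbiter_score(task, answer):
        """Judge the answer (stand-in for GPT-4o, the only grounding signal)."""
        return 1.0

    history = []
    for step in range(100):
        task = challenger(history)           # rewarded when the solver fails
        answer = solver(task)                # rewarded when the arbiter approves
        score = arbiter_score(task, answer)
        history.append((task, answer, score))
        # Without arbiter_score, the Challenger/Solver game has no tie to
        # reality -- hence the hallucination-amplifier worry once the solver
        # approaches the arbiter's coverage.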
▲ | magicalhippo 5 days ago | parent | prev | next |
For those not in the know, that's Generative Adversarial Networks[1], where two neural networks are trained in a competitive way. One network typically generates tasks for the other, and is rewarded if it manages to make the other network fail the task. The other network is rewarded if it successfully completes the task. Thus the adversarial network tries to find weaknesses to exploit, and the combined training makes the solving network much stronger. Or at least that's the idea.

[1]: https://en.wikipedia.org/wiki/Generative_adversarial_network
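As a concrete illustration, here is a minimal toy GAN training loop in PyTorch. The networks, data distribution, and hyperparameters are made up for the sketch: generator G is rewarded for fooling discriminator D, while D is rewarded for telling real samples from generated ones.

    # Toy GAN sketch (hypothetical setup): G tries to fool D; D tries to
    # separate real samples from G's outputs.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 8, 2
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(64, data_dim) * 0.5 + 2.0  # stand-in "real" data
        fake = G(torch.randn(64, latent_dim))

        # Discriminator step: label real samples 1, generated samples 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: rewarded when D calls its fakes real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Note that D's loss is anchored to real samples; that anchor is what keeps the adversarial game tied to the data distribution rather than drifting arbitrarily.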
▲ | torginus 5 days ago | parent | prev |
GANs are a supervised training method, not really self-improving: once the generator converges to being able to reproduce the training set, there is nothing further to gain.