cubefox | an hour ago
Usual terminology for the three main learning paradigms:

- Supervised learning (e.g. matching labels to pictures)
- Unsupervised learning / self-supervised learning (pretraining)
- Reinforcement learning

Now the confusing thing is that Dwarkesh Patel instead calls pretraining "supervised learning", and you call reinforcement learning a form of unsupervised learning.
pavvell | 13 minutes ago
SL and SSL are very similar "algorithmically": both use gradient descent on a loss function for predicting labels, either human-provided (SL) or auto-generated (SSL). Since LLMs are pretrained on human texts, you might say the labels (i.e., the next token to predict) were in fact human-provided. So I see how pretraining LLMs blurs the line between SL and SSL.

In modern RL, we also train deep nets on some (often non-trivial) loss function, and RL generates its own training data. Hence, it blurs the line with SSL. I'd say, however, it's more complex and more computationally expensive: you need many / long rollouts to find a signal to learn from. All of this process is automated, so from this perspective it blurs the line with UL too :-) Though its dependence on the reward is what makes the difference.

Overall, going from more structured to less structured, I'd order the learning approaches: SL, SSL (pretraining), RL, UL.
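To illustrate the "auto-generated labels" point above, here is a minimal sketch (with made-up token ids) of how causal-LM pretraining derives its labels from the data itself, rather than from a separate human annotation step:

```python
# Hypothetical tokenized text sequence (ids are illustrative only).
tokens = [101, 7, 42, 9, 55]

# Self-supervised next-token setup: inputs and labels are both
# slices of the same sequence, shifted by one position.
inputs = tokens[:-1]   # model sees [101, 7, 42, 9]
labels = tokens[1:]    # model must predict [7, 42, 9, 55]

# In plain supervised learning, `labels` would instead be a separate
# human-provided list (e.g. class ids per image), not derived from `inputs`.
print(inputs, labels)
```

Since the text was written by humans, those shifted labels are in a loose sense "human-provided", which is exactly why pretraining blurs the SL/SSL boundary.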
thegeomaster | 39 minutes ago
You could think of supervised learning as learning against a known ground truth, which pretraining certainly is.