rockinghigh 2 days ago
They add new data to the existing base model via continuous pre-training. You save on the bulk of pre-training (the next-token prediction task), but you still have to re-run the mid- and post-training stages: context-length extension, supervised fine-tuning, reinforcement learning, safety alignment, and so on. A rough sketch of the continued pre-training step is below.
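Mechanically, continued pre-training is just more next-token-prediction training resumed from the released checkpoint on a corpus of newer documents. A minimal sketch with Hugging Face Transformers; the base checkpoint name, data file, and hyperparameters here are placeholders, not anything a lab has published:

    # Hypothetical continued pre-training sketch: resume the causal LM
    # (next-token prediction) objective from an existing base checkpoint
    # on a corpus of newer documents.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "gpt2"  # placeholder; stand-in for whatever base checkpoint you start from
    tokenizer = AutoTokenizer.from_pretrained(base)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # needed for padding in the collator
    model = AutoModelForCausalLM.from_pretrained(base)

    # Placeholder corpus of new data; any tokenized text dataset works here.
    raw = load_dataset("text", data_files={"train": "new_documents.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=1024)

    train = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="ckpt-continued",
            per_device_train_batch_size=1,
            gradient_accumulation_steps=16,
            learning_rate=1e-5,   # typically much lower than the original pre-training LR
            num_train_epochs=1,
        ),
        train_dataset=train,
        # mlm=False -> plain causal LM labels, i.e. next-token prediction
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

The checkpoint that falls out of this still has to go back through SFT, RL, and safety alignment before it behaves like the released instruct model, which is the cost the comment above is pointing at.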
astrange 2 days ago
Continuous pretraining has issues because the model starts forgetting the older material (catastrophic forgetting). There is some research into other approaches.
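One mitigation discussed in the literature is replay (rehearsal): keep mixing a slice of the original pre-training distribution in with the new data so the model keeps seeing the old material. A rough sketch with Hugging Face datasets, where the file names and mixing ratio are made up:

    # Hypothetical replay mixture: interleave a sample of the original
    # pre-training data with the new corpus before continued pre-training.
    from datasets import interleave_datasets, load_dataset

    old = load_dataset("text", data_files={"train": "old_pretrain_sample.txt"})["train"]
    new = load_dataset("text", data_files={"train": "new_documents.txt"})["train"]

    # e.g. roughly 30% replayed old data, 70% new data on average
    mixed = interleave_datasets([old, new], probabilities=[0.3, 0.7], seed=0)

The `mixed` dataset would then feed the same training loop as before; the ratio is a knob people tune, not a fixed recipe.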