tootyskooty 15 hours ago:
See the pretraining section of prerelease_notes.md: https://github.com/DGoettlich/history-llms/blob/main/ranke-4...
pests 14 hours ago:
I was curious: they train a 1900 base model, then continually pretrain each checkpoint up to its exact cutoff year: "To keep training expenses down, we train one checkpoint on data up to 1900, then continuously pretrain further checkpoints on 20B tokens of data 1900–{cutoff}."
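For intuition, here is a minimal sketch of that continued-pretraining recipe, not the authors' actual code: the checkpoint path, dataset name, and "year"/"text" fields are all hypothetical placeholders, and the step count stands in for the 20B-token budget mentioned in the notes.

    # Hedged sketch: resume pretraining from the shared 1900 base checkpoint
    # on documents dated 1900..cutoff only. All names below are assumptions,
    # not the project's actual identifiers.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    CUTOFF = 1913  # cutoff year for this particular checkpoint

    tok = AutoTokenizer.from_pretrained("base-1900-checkpoint")    # hypothetical path
    model = AutoModelForCausalLM.from_pretrained("base-1900-checkpoint")

    # Keep only the 1900..cutoff slice of the corpus for the continued run.
    ds = load_dataset("historical-corpus", split="train")          # hypothetical dataset
    ds = ds.filter(lambda ex: 1900 <= ex["year"] <= CUTOFF)
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                batched=True, remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"ckpt-{CUTOFF}",
                               max_steps=10_000),  # placeholder for ~20B tokens
        train_dataset=ds,
        # mlm=False gives plain causal-LM (next-token) training
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()  # continue pretraining; repeat once per cutoff year

The appeal of this setup is that the expensive pre-1900 run is paid for once, and each year-specific checkpoint only costs the comparatively small 1900-to-cutoff continuation.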