zozbot234 15 hours ago

They apparently pre-train with all data up to 1900 and then fine-tune with 1900-1913 data. Anyway, the amount of available content tends to increase quickly over time, since mass literature, periodicals, newspapers, etc. only really became widespread over the course of the 19th and early 20th centuries.

mmooss 15 hours ago | parent

They pre-train with all data up to 1900 and then fine-tune with 1900-1913 data.

Where does it say that? I tried to find more detail. Thanks.

tootyskooty 15 hours ago | parent

See the pretraining section of prerelease_notes.md:

https://github.com/DGoettlich/history-llms/blob/main/ranke-4...

pests 14 hours ago | parent

I was curious. They train a 1900 base model, then fine-tune to the exact year:

"To keep training expenses down, we train one checkpoint on data up to 1900, then continuously pretrain further checkpoints on 20B tokens of data 1900-${cutoff}$. "