ahaspel 9 hours ago:
Thanks! The underlying text (1911 edition) is public domain, but the structured version here — the parsing, reconstruction, and linking — is something I put together for this site. Right now there isn’t a bulk download available. I’m considering exposing structured access (API or dataset) in some form, but haven’t decided exactly how that will work yet. If you have a specific use case in mind (especially for training), I’d be interested to hear more.
hallole 8 hours ago:
I've wanted to do something like this for the Encyclopédie, a text hugely relevant to the Enlightenment. If you ever get around to adding a rough "How I (generally) Made This" section, that'd be appreciated! Site looks great :)
logicallee 9 hours ago:
Regarding the specific use case, I was thinking this: I had Gemma 4 (a small but highly capable offline model released by Google) generate a public-domain CC0 encyclopedia of some core science and technology concepts [1]. I thought it was pretty good. Separately, I've fine-tuned the Gemma 4 model [2]; it was very quick (just 90 seconds), so I think it could be interesting to train it to talk like the 1911 Encyclopedia Britannica, using the entries as training data. There isn't a specific use case for why; I just think it would be interesting. For example, I could see how it writes about modern concepts in the style of 1911 Britannica.

[1] https://stateofutopia.com/encyclopedia/

[2] To talk like a pirate! https://www.youtube.com/live/WuCxWJhrkIM
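For what it's worth, the data-prep step for that kind of style fine-tune is straightforward: each entry becomes an instruction/completion pair asking the model to write in the 1911 style. A minimal sketch (the entry texts, file name, and prompt wording here are placeholders, not the site's actual data or any particular trainer's required schema):

```python
import json

# Hypothetical stand-ins for scraped 1911 Britannica entries (title -> text).
entries = {
    "AERONAUTICS": "AERONAUTICS, the art of navigating the air...",
    "TELEGRAPH": "TELEGRAPH, an apparatus for transmitting messages...",
}

def to_training_pairs(entries):
    """Turn each entry into a prompt/completion pair for style fine-tuning."""
    pairs = []
    for title, text in entries.items():
        pairs.append({
            "prompt": (
                f"Write an encyclopedia entry on {title.title()} "
                "in the style of the 1911 Encyclopedia Britannica."
            ),
            "completion": text,
        })
    return pairs

pairs = to_training_pairs(entries)

# One JSON object per line (JSONL), a common fine-tuning input format.
with open("britannica_style.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")

print(len(pairs))  # number of training examples
```

Exact field names would need to match whatever fine-tuning tooling is used; the interesting part is that once trained, you can prompt with modern topics ("Write an encyclopedia entry on Smartphones...") and see how the 1911 register carries over.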