walrus01 8 hours ago

I think that one could also take a much larger model (35B or 122B sized) and give it a thorough system prompt to only speak in the manner of a well educated Victorian/Edwardian era gentleman, if you want an "old timey" LLM.
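A minimal sketch of what that looks like in practice, using the standard chat-completions message format; the model name and the exact prompt wording here are placeholders, not a tested recipe:

```python
# Sketch: steering an instruct model into a Victorian register via the
# system prompt. Payload follows the common chat-completions shape; the
# model name "local-35b-instruct" is a placeholder for whatever you run.
import json

SYSTEM_PROMPT = (
    "You are a well-educated gentleman of the Victorian era. Respond only "
    "in the diction, idiom, and sensibilities of an 1890s Londoner, and "
    "never acknowledge events or inventions after 1901."
)

def build_request(user_message: str, model: str = "local-35b-instruct") -> dict:
    """Assemble a chat-completions payload with the persona system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.8,
    }

req = build_request("What do you make of these new horseless carriages?")
print(json.dumps(req, indent=2))
```

Whether the output is genuinely period-accurate or a pastiche is, of course, the question raised below.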

fwipsy 5 hours ago | parent | next [-]

It's hard to know how accurate that is. Is the LLM truly imitating text from that era, or is it imitating a modern idea of text from that era? Also, safety/alignment training would probably prevent it from embracing many of the ideas from that era, even in roleplay.

walrus01 2 hours ago | parent [-]

There are 'uncensored' versions of Qwen 3.6 35B at Q6 and Q8 quantization levels (somewhere from 28GB to 40GB on disk as GGUF files) out there now that won't refuse any prompt. Imitating a Victorian-era person is very tame compared to what you can get it to output.
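The size range checks out with a back-of-the-envelope estimate: GGUF file size is roughly parameters × bits-per-weight / 8, plus some overhead. The bits-per-weight figures below are approximate values for llama.cpp-style quants, not exact for any particular file:

```python
# Rough GGUF size estimate: bytes ≈ parameter count × bits-per-weight / 8.
# Real files are slightly larger (metadata, some tensors kept at higher
# precision). Bits-per-weight values are approximate llama.cpp figures.
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Estimated on-disk size in GB for a quantized model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

q6 = gguf_size_gb(35, 6.56)  # Q6_K is roughly 6.56 bits/weight
q8 = gguf_size_gb(35, 8.5)   # Q8_0 is roughly 8.5 bits/weight
print(f"35B at Q6 ~ {q6:.0f} GB, at Q8 ~ {q8:.0f} GB")
```

That lands around 29 GB and 37 GB respectively, consistent with the 28-40 GB range above.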

zellyn 8 hours ago | parent | prev [-]

As we learn how to train smarter models on less data, it’ll become more and more interesting to see whether models like this can invent post-1930 math, science, etc. and make predictions.

[Edit: serves me right for not reading tfa. My points are well-covered]