I fed 24 years of my blog posts to a Markov model (susam.net)
104 points by zdw 6 hours ago | 38 comments
sebastianmestre 27 minutes ago | parent | next [-]

Cool article; it got me to play around with Markov models, too! I first tried a Markov model over plain characters:

> Itheve whe oiv v f vidleared ods alat akn atr. s m w bl po ar 20

Using pairs of consecutive characters (order-2 Markov model) helps, but not much:

> I hateregratics.pyth fwd-i-sed wor is wors.py < smach. I worgene arkov ment by compt the fecompultiny of 5, ithe dons

Triplets (order 3) are a bit better:

> I Fed tooks of the say, I just train. All can beconsist answer efferessiblementate

> how examples, on 13 Debian is the more M-x: Execute testeration
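
A minimal sketch of what I mean by the character-level version (not the exact script I used; "corpus.txt" is a placeholder for whatever text you train on):

    import random
    from collections import defaultdict

    def train(text, order=3):
        # Map each run of `order` characters to the characters seen right after it.
        model = defaultdict(list)
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
        return model

    def generate(model, order=3, length=200):
        # Start from a random context and repeatedly sample the next character.
        out = random.choice(list(model.keys()))
        for _ in range(length):
            followers = model.get(out[-order:])
            if not followers:  # dead end: this context never appeared in training
                break
            out += random.choice(followers)
        return out

    with open("corpus.txt") as f:  # placeholder file name
        model = train(f.read(), order=3)
    print(generate(model, order=3))

With order=1 you get the single-character gibberish above; order=2 and order=3 give the pair and triplet versions.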

LLMs usually do some sort of tokenization step prior to learning parameters. So I decided to try out order-1 Markov models over text tokenized with byte pair encoding (BPE).

Trained on TFA I got this:

> I Fed by the used few 200,000 words. All comments were executabove. This value large portive comment then onstring takended to enciece of base for the see marked fewer words in the...

Then I bumped the order up to 2:

> I Fed 24 Years of My Blog Posts to a Markov Model

> By Susam Pal on 13 Dec 2025

>

> Yesterday I shared a little program calle...

It just reproduced the entire article verbatim. This makes sense: BPE run to completion merges away every repeated pair, so each adjacent token pair occurs only once and the order-2 Markov transitions are fully deterministic.

I've heard that in NLP applications, it's very common to run BPE only up to a certain number of different tokens, so I tried that out next.

Without a limit, BPE ended up with 894 distinct tokens. Even a slight cap (800) stops the output from being deterministic:

> I Fed 24 years of My Blog Postly coherent. We need to be careful about not increasing the order too much. In fact, if we increase the order of the model to 5, the generated text becomes very dry and factual

It's hard to judge how coherent this is compared with the author's trigram approach, because the text I'm training my model on already contains incoherent phrases.
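
Roughly the shape of the capped BPE step, as a sketch with placeholder names rather than my exact code (max_vocab is the cap mentioned above, e.g. 800):

    from collections import Counter

    def train_bpe(text, max_vocab=800):
        # Start from single characters and greedily merge the most frequent
        # adjacent pair until the vocabulary hits max_vocab or no pair repeats.
        tokens = list(text)
        vocab = set(tokens)
        while len(vocab) < max_vocab:
            pairs = Counter(zip(tokens, tokens[1:]))
            if not pairs:
                break
            (a, b), count = pairs.most_common(1)[0]
            if count < 2:  # run to completion: no pair repeats, so order-2 goes deterministic
                break
            merged, i = [], 0
            while i < len(tokens):
                if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                    merged.append(a + b)
                    i += 2
                else:
                    merged.append(tokens[i])
                    i += 1
            tokens = merged
            vocab.add(a + b)
        return tokens

The resulting token list can then be fed into the same kind of order-k chain as before, just over tokens instead of characters.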

Anyways, Markov models are a lot of fun!

vunderba 5 hours ago | parent | prev | next [-]

I did something similar many years ago. I fed about half a million words (two decades of mostly fantasy and science fiction writing) into a Markov model that could generate text using a “gram slider” ranging from 2-grams to 5-grams.

I used it as a kind of “dream well” whenever I wanted to draw some muse from the same deep spring. It felt like a spiritual successor to what I used to do as a kid: flipping to a random page in an old 1950s Funk & Wagnalls dictionary and using whatever I found there as a writing seed.

boznz 44 minutes ago | parent | next [-]

What a fantastic idea. I have about 30 years of writing, mostly chapters and plots for novels that never coalesced. I'd love to know how it turns out, too.

davely 3 hours ago | parent | prev | next [-]

I gave a talk in 2015 about doing the same thing with my tweet history (about 20K tweets at the time) and how I used it as source material for a Twitter bot that could reply to users. [1]

It was pretty fun!

[1] https://youtu.be/rMmXdiUGsr4

echelon 38 minutes ago | parent | prev | next [-]

What would the equivalent be with LLMs?

I spend all of my time with image and video models and have very thin knowledge when it comes to running, fine-tuning, etc. with language models.

How would one start with training an LLM on the entire corpus of one's writings? What model would you use? What scripts and tools?

Has anyone had good results with this?

Do you need to subsequently add system prompts, or does it just write like you out of the box?

How could you make it answer your phone, for instance? Or discord messages? Would that sound natural, or is that too far out of domain?

idiotsecant an hour ago | parent | prev | next [-]

Did it work?

bitwize 4 hours ago | parent | prev [-]

Terry Davis, pbuh, did something very similar!

Aperocky 15 minutes ago | parent | prev | next [-]

Here's a quick custom Markov page you can have fun with (all client-side): https://aperocky.com/markov/

The npm package for the Markov model, if you'd rather play with it on localhost or somewhere else: https://github.com/Aperocky/weighted-markov-generator

ikhatri 6 minutes ago | parent | prev | next [-]

When I was in college my friends and I did something similar with all of Donald Trump’s tweets as a funny hackathon project for PennApps. The site isn’t up anymore (RIP free heroku hosting) but the code is still up on GitHub: https://github.com/ikhatri/trumpitter

hilti 3 hours ago | parent | prev | next [-]

First of all: Thank you for giving.

Giving 24 years of your experience, thoughts, and time to us.

This is special in these times of wondering, baiting and consuming only.

lacunary 5 hours ago | parent | prev | next [-]

I recall a Markov chain bot on IRC in the mid-2000s. I didn't see anything better until GPT came along!

nurettin 5 hours ago | parent [-]

Yes, I made one using bitlbee back in the 2000s. Good times!

pavel_lishin 5 hours ago | parent [-]

I made one for Hipchat at a company. I can't remember if it could emulate specific users, or just channels, but both were definitely on my roadmap at the time.

lloydatkinson 4 hours ago | parent [-]

I'm hoping someone can find it so I can bookmark it, but I once read a story about a company that let multiple Markov chain bots loose in a Slack channel. A few days later, production went down because one of them ran a Slack command that deployed or destroyed their infrastructure.

hexnuts 3 hours ago | parent | prev | next [-]

I just realized one of the things people might start doing is making a gamma model of their personality. It won't even approach who they were as a person, but it will give their descendants (or bored researchers) a 60% approximation of who they were and their views. (The 60% is pulled from nowhere to justify the "gamma" designation, since there isn't a good scale for personality-mirror quality in LLMs, as far as I'm aware.)

jacquesm 2 hours ago | parent [-]

"Dixie can't meaningfully grow as a person. All that he ever will be is burned onto that cart;"

"Do me a favor, boy. This scam of yours, when it's over, you erase this god-damned thing."

swyx 5 hours ago | parent | prev | next [-]

now i wonder if you can compare this against feeding it into a GPT-style transformer of a similar order of magnitude in param count...

0_____0 4 hours ago | parent | next [-]

I thought for a moment your comment was the output of a Markov chain trained on HN

bitwize 4 hours ago | parent [-]

No mention of Rust or gut bacteria. Definitely not.

fragmede 2 hours ago | parent | prev [-]

That's the question today. Turns out transformers really are a leap forward in terms of AI, whereas Markov chains, scaled up to today's level of resources and capacity, will still output gibberish.

anthk 3 hours ago | parent | prev | next [-]

MegaHAL/Hailo (cpanm -n hailo for Perl users) can still be fun too.

Usage:

      hailo -t corpus.txt -b brain.brn
Where "corpus.txt" should be a file with one sentence per line. Easy to do under sed/awk/perl.

      hailo -b brain.brn
This spawns the chatbot with your trained brain.

By default Hailo chooses the easy engine. If you want something more "realistic", pick the advanced one mentioned at 'perldoc hailo' with the -e flag.

atum47 5 hours ago | parent | prev [-]

I usually have these hypothetical technical discussions with ChatGPT (I can share if you like), asking him things like: aren't LLMs just huge Markov chains?! And now I see your project... Funny

pavel_lishin 5 hours ago | parent | next [-]

> I can share if you like

Respectfully, absolutely nobody wants to read a copy-and-paste of a chat session with ChatGPT.

empiko 4 hours ago | parent | prev | next [-]

LLMs are indeed Markov chains. The breakthrough is that we are able to efficiently compute well-performing transition probabilities for an enormous number of states using ML.

famouswaffles 4 hours ago | parent | next [-]

LLMs are not Markov chains unless you contort the meaning of a Markov model's state so much that you could even include the human brain.

chpatrick 2 hours ago | parent | next [-]

Not sure why that's contorting. A Markov model is anything where you know the probability of going from state A to state B, and the state can be anything. For text generation, the state goes from the previous text to that text with one extra character appended, which is true for both LLMs and old-school n-gram Markov models.

famouswaffles 2 hours ago | parent | next [-]

Yes, technically you can frame an LLM as a Markov chain by defining the "state" as the entire sequence of previous tokens. But that's a vacuous observation: under that definition, literally any deterministic or stochastic process becomes a Markov chain if you make the state space flexible enough. A chess game is a "Markov chain" if the state includes the full board position and move history. The weather is a "Markov chain" if the state includes all relevant atmospheric variables.

The problem is that this definition strips away what makes Markov models useful and interesting as a modeling framework. A “Markov text model” is a low-order Markov model (e.g., n-grams) with a fixed, tractable state and transitions based only on the last k tokens. LLMs aren't that: they model with unfixed, long-range context (up to the window). For Markov chains, k is non-negotiable. It's a constant, not a variable. Once you make it a variable, nearly any process can be described as Markovian, and the word is useless.

chpatrick an hour ago | parent [-]

Sure, many things can be modelled as Markov chains, which is why they're useful. But it's a mathematical model, so there's no bound on how big the state is allowed to be. The only requirement is that the current state alone determines the probabilities of the next state, which is exactly how LLMs work. They don't remember anything beyond the last thing they generated. They just have big context windows.

sigbottle an hour ago | parent | next [-]

The defining feature of the "Markov property" is that the next state depends only on the current state, not on the history.
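
In symbols, the standard definition (just to pin it down):

      P(X_{n+1} = x | X_n, X_{n-1}, ..., X_1) = P(X_{n+1} = x | X_n)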

And in classes, the very first trick you learn to skirt around history is to add Boolean variables to your "memory state". Your system now models "did it rain on each of the previous N days?" The issue, obviously, is that this blows up exponentially if you're not careful. Maybe you get clever and make your state a sliding-window history; then it's linear in the number of days you remember. Maybe you mix both. Maybe you add even more information. Tradeoffs, tradeoffs.

I don't think LLMs embody the Markov property at all, even if you can make everything eventually follow the Markov property by "considering every single possible state", of which there are (size of token set)^(length) states at minimum because of the KV cache.

chpatrick 18 minutes ago | parent [-]

The KV cache doesn't affect it because it's just an optimization. LLMs are stateless and don't take any input other than a fixed block of text. They don't have memory, which is exactly the requirement for a Markov chain.

famouswaffles an hour ago | parent | prev [-]

>Sure many things can be modelled as Markov chains

Again, no, they can't, unless you break the definition. k is not a variable. It's as simple as that. The state cannot be flexible.

1. The Markov text model uses k tokens, not k tokens sometimes, n tokens other times, and whatever you want it to be the rest of the time.

2. A Markov model is explicitly described as 'assuming that future states depend only on the current state, not on the events that occurred before it'. Defining your 'state' such that every event imaginable can be captured inside it is a 'clever' workaround, but it ultimately describes something that is decidedly not a Markov model.

chpatrick 20 minutes ago | parent [-]

It's not n sometimes and k tokens some other times. LLMs have a fixed context window; you just sometimes have less text, so the window isn't full. They're pure functions from a fixed-size block of text to a probability distribution over the next token, same as the classic lookup-table n-gram Markov chain model.

wizzwizz4 2 hours ago | parent | prev [-]

A GPT model would be modelled as an n-gram Markov model where n is the size of the context window. This is slightly useful for getting some crude bounds on the behaviour of GPT models in general, but is not a very efficient way to store a GPT model.

chpatrick an hour ago | parent [-]

I'm not saying it's an n-gram Markov model or that you should store it as a lookup table. A Markov model is just a mathematical concept that says nothing about storage, only that the state-transition probabilities are a pure function of the current state.

sophrosyne42 4 hours ago | parent | prev [-]

Well, LLMs aren't human brains, unless you contort the definition of matrix algebra so much that you could even include them.

cwyers 4 hours ago | parent | prev [-]

Yeah, there are only two differences between using Markov chains to predict words and LLMs:

* LLMs don't use Markov chains
* LLMs don't predict words.

arboles 7 minutes ago | parent [-]

* Markov chains have been used to predict syllables or letters since the beginning, and an LLM's tokenizer could be used for Markov chains

* The R package markovchain[1] may look like it's using Markov chains, but it's actually using the R programming language, zeros and ones.

[1] https://cran.r-project.org/web/packages/markovchain/index.ht...

roarcher 4 hours ago | parent | prev [-]

...are you under the impression that you have an exclusive relationship with "him"? Everyone else has access to ChatGPT too.