sebastianmestre 5 hours ago

Cool article, it got me to play around with Markov models, too! I first did a Markov model over plain characters.

> Itheve whe oiv v f vidleared ods alat akn atr. s m w bl po ar 20

Using pairs of consecutive characters (order-2 Markov model) helps, but not much:

> I hateregratics.pyth fwd-i-sed wor is wors.py < smach. I worgene arkov ment by compt the fecompultiny of 5, ithe dons

Triplets (order 3) are a bit better:

> I Fed tooks of the say, I just train. All can beconsist answer efferessiblementate

> how examples, on 13 Debian is the more M-x: Execute testeration
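Roughly, the character-level models look like this (a simplified sketch rather than my exact code; the order parameter covers the three variants above):

    import random
    from collections import defaultdict

    def train_char_markov(text, order=1):
        # Map each `order`-character context to the characters seen after it.
        model = defaultdict(list)
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
        return model

    def generate_chars(model, order, length=200):
        # Start from a random context and sample one character at a time.
        out = random.choice(list(model.keys()))
        for _ in range(length):
            followers = model.get(out[-order:])
            if not followers:      # context only ever appeared at the end of the text
                break
            out += random.choice(followers)
        return out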

LLMs usually do some sort of tokenization step prior to learning parameters. So I decided to try out order-1 Markov models over text tokenized with byte pair encoding (BPE).

Trained on TFA, it gave me this:

> I Fed by the used few 200,000 words. All comments were executabove. This value large portive comment then onstring takended to enciece of base for the see marked fewer words in the...
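The BPE-plus-Markov setup is roughly this (again just a sketch; the max_merges knob comes up again below):

    import random
    from collections import Counter

    def bpe_tokenize(text, max_merges=None):
        # Greedy byte pair encoding over the raw character sequence: repeatedly
        # merge the most frequent adjacent pair until no pair repeats, or until
        # max_merges merges have been applied.
        tokens = list(text)
        merges = 0
        while max_merges is None or merges < max_merges:
            pairs = Counter(zip(tokens, tokens[1:]))
            if not pairs:
                break
            (a, b), count = pairs.most_common(1)[0]
            if count < 2:          # every adjacent pair is now unique
                break
            merged, rewritten, i = a + b, [], 0
            while i < len(tokens):
                if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                    rewritten.append(merged)
                    i += 2
                else:
                    rewritten.append(tokens[i])
                    i += 1
            tokens = rewritten
            merges += 1
        return tokens

    def train_token_markov(tokens, order=1):
        # Map each `order`-token context to the tokens seen after it.
        model = {}
        for i in range(len(tokens) - order):
            model.setdefault(tuple(tokens[i:i + order]), []).append(tokens[i + order])
        return model

    def generate_tokens(model, order, length=100):
        out = list(random.choice(list(model.keys())))
        for _ in range(length):
            followers = model.get(tuple(out[-order:]))
            if not followers:
                break
            out.append(random.choice(followers))
        return "".join(out)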

Then I bumped the order up to 2:

> I Fed 24 Years of My Blog Posts to a Markov Model

> By Susam Pal on 13 Dec 2025

>

> Yesterday I shared a little program calle...

It just reproduced the entire article verbatim. This makes sense: run to completion, BPE keeps merging until no adjacent pair of tokens repeats, so each order-2 context occurs exactly once in the training text and the order-2 Markov transitions are fully deterministic.
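A quick way to check that, using the sketch above (article_text stands for whatever text was tokenized):

    tokens = bpe_tokenize(article_text)               # merges run to completion
    pair_counts = Counter(zip(tokens, tokens[1:]))
    assert max(pair_counts.values()) == 1             # every order-2 context is unique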

I've heard that in NLP applications it's very common to stop BPE once the vocabulary reaches a certain number of distinct tokens, so I tried that next.

Without a limit, BPE produced 894 distinct tokens. Even a mild cap (800) is enough to stop the output from being deterministic.
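With the sketch above that's the max_merges knob; it caps merge steps rather than the vocabulary size directly, but the effect is the same (the value below is just illustrative):

    tokens = bpe_tokenize(article_text, max_merges=700)  # stop early so some adjacent pairs keep repeating
    model = train_token_markov(tokens, order=2)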

> I Fed 24 years of My Blog Postly coherent. We need to be careful about not increasing the order too much. In fact, if we increase the order of the model to 5, the generated text becomes very dry and factual

It's hard to judge how coherent this is compared with the author's trigram approach, because the text I'm training my model on already contains incoherent phrases.

Anyways, Markov models are a lot of fun!

andai 2 hours ago

Nice :) I did something similar a few days ago. What I ended up with was a 50/50 blend of hilarious nonsense and verbatim snippets. There seemed to be a lot of chains where there was only one possible next token.

I'm considering just deleting from the db all tokens that have only one possible descendant. I think that would solve the problem. The threshold could also be raised, e.g. requiring a token to have at least 3 possible continuations.
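Something like this pruning pass is what I have in mind (just a sketch; `transitions` stands in for the db, as a map from token to the list of observed next tokens):

    MIN_FANOUT = 3   # require at least this many distinct continuations

    pruned = {
        token: followers
        for token, followers in transitions.items()
        if len(set(followers)) >= MIN_FANOUT
    }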

However, that's too heavy-handed: plenty of phrases and grammatical structures would get deleted along with them. What I actually want to avoid is long chains where there's only one next token, and I haven't figured out how to solve that yet.

vunderba an hour ago

That's where a dynamic n-gram order comes into play. Train Markov models on n-grams of order 1 through 5, then scale the order according to the number of potential paths available.

You'll also need a "sort of traversal stack" so you can rewind if you get stuck several plies in.
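Something along these lines (a sketch; names are illustrative, and `tokens` is whatever unit the model is trained on: words, characters, BPE pieces):

    import random

    def train_multi_order(tokens, max_order=5):
        # One transition table per order, 1..max_order.
        models = {n: {} for n in range(1, max_order + 1)}
        for n in range(1, max_order + 1):
            for i in range(len(tokens) - n):
                models[n].setdefault(tuple(tokens[i:i + n]), []).append(tokens[i + n])
        return models

    def generate(models, length=100, max_order=5, min_fanout=2):
        out = list(random.choice(list(models[1].keys())))
        stack = []   # (position of the chosen token, untried alternatives)
        while len(out) < length:
            # Prefer the highest order whose context still offers enough paths;
            # otherwise fall back to the highest order with any continuation.
            choices = None
            for n in range(min(max_order, len(out)), 0, -1):
                followers = models[n].get(tuple(out[-n:]), [])
                if len(set(followers)) >= min_fanout:
                    choices = list(set(followers))
                    break
                if followers and choices is None:
                    choices = list(set(followers))
            if not choices:
                # Dead end: rewind to the most recent branch point with untried options.
                while stack and not stack[-1][1]:
                    out = out[:stack.pop()[0]]
                if not stack:
                    break
                pos, untried = stack[-1]
                out = out[:pos]
                out.append(untried.pop())
                continue
            random.shuffle(choices)
            out.append(choices.pop())
            stack.append((len(out) - 1, choices))
        return out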

countWSS 3 hours ago

The trick to prevent 'dry' output that quotes verbatim is to make the 5-word context limit flexible: if there is only one path, reduce it to 4.
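Generalizing slightly, keep shrinking the context until there is more than one path. A sketch, assuming one transition table per context length, keyed by word tuples:

    import random

    def next_word(models, history, max_order=5):
        # Use the longest context (up to max_order words) that still offers
        # more than one distinct continuation; shrink the context otherwise.
        for n in range(min(max_order, len(history)), 0, -1):
            followers = models[n].get(tuple(history[-n:]), [])
            if len(set(followers)) > 1:
                return random.choice(followers)
        followers = models[1].get(tuple(history[-1:]), [])   # every context length was deterministic
        return random.choice(followers) if followers else None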

Tallain 2 hours ago

I have a pet tool built on Markov chains that I use for conlang work in writing/worldbuilding, and I am smacking my forehead right now at how obvious this seems in hindsight. This is great advice, thank you.